Dec 08 09:14:35 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 08 09:14:35 crc restorecon[4657]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:35 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 
09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc 
restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 08 09:14:36 crc restorecon[4657]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Dec 08 09:14:36 crc restorecon[4657]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Dec 08 09:14:36 crc kubenswrapper[4662]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 08 09:14:36 crc kubenswrapper[4662]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Dec 08 09:14:36 crc kubenswrapper[4662]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 08 09:14:36 crc kubenswrapper[4662]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 08 09:14:36 crc kubenswrapper[4662]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 08 09:14:36 crc kubenswrapper[4662]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.528757 4662 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537231 4662 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537286 4662 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537295 4662 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537304 4662 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537312 4662 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537320 4662 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537331 4662 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537339 4662 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537348 4662 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537357 4662 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537365 4662 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537373 4662 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537381 4662 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537389 4662 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537396 4662 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537405 4662 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537413 4662 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537421 4662 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537429 4662 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537436 4662 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537445 4662 feature_gate.go:330] 
unrecognized feature gate: Example Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537453 4662 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537460 4662 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537471 4662 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537483 4662 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537492 4662 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537501 4662 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537510 4662 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537519 4662 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537528 4662 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537536 4662 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537545 4662 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537553 4662 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537575 4662 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537585 4662 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537596 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537604 4662 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537612 4662 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537619 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537627 4662 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537634 4662 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537643 4662 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537651 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537659 4662 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537667 4662 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537675 4662 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 
09:14:36.537682 4662 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537693 4662 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537702 4662 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537711 4662 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537720 4662 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537729 4662 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537773 4662 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537782 4662 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537792 4662 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537803 4662 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537811 4662 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537820 4662 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537829 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537837 4662 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537845 4662 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537853 4662 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537860 4662 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537868 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537876 4662 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537883 4662 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537891 4662 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537899 4662 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537907 4662 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537919 4662 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
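The long run of feature_gate.go:330 warnings that starts above and continues below comes from gate names (GatewayAPI, NewOLM, PinnedImages, and so on) that the upstream kubelet gate parser does not recognize; they appear to be OpenShift-level feature gates passed through wholesale, and they are warnings only, not fatal. The same dump repeats three more times below as the configuration is re-parsed. A sketch that collapses the noise into per-gate counts:

```python
import re
import sys
from collections import Counter

GATE = re.compile(r"unrecognized feature gate: ([A-Za-z0-9]+)")

# kubelet re-parses its gate configuration several times during startup and
# re-logs every unknown name on each pass, so counts greater than 1 are expected.
counts = Counter(GATE.findall(sys.stdin.read()))
for gate, n in counts.most_common():
    print(f"{n:3d}  {gate}")
```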
Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.537929 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538524 4662 flags.go:64] FLAG: --address="0.0.0.0" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538553 4662 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538569 4662 flags.go:64] FLAG: --anonymous-auth="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538581 4662 flags.go:64] FLAG: --application-metrics-count-limit="100" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538593 4662 flags.go:64] FLAG: --authentication-token-webhook="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538603 4662 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538616 4662 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538628 4662 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538637 4662 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538647 4662 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538657 4662 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538671 4662 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538680 4662 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538690 4662 flags.go:64] FLAG: --cgroup-root="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538699 4662 flags.go:64] FLAG: --cgroups-per-qos="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538708 4662 flags.go:64] FLAG: --client-ca-file="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538716 4662 flags.go:64] FLAG: --cloud-config="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538725 4662 flags.go:64] FLAG: --cloud-provider="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538734 4662 flags.go:64] FLAG: --cluster-dns="[]" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538787 4662 flags.go:64] FLAG: --cluster-domain="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538796 4662 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538805 4662 flags.go:64] FLAG: --config-dir="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538814 4662 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538824 4662 flags.go:64] FLAG: --container-log-max-files="5" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538836 4662 flags.go:64] FLAG: --container-log-max-size="10Mi" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538845 4662 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538855 4662 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538864 4662 flags.go:64] FLAG: --containerd-namespace="k8s.io" 
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538876 4662 flags.go:64] FLAG: --contention-profiling="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538885 4662 flags.go:64] FLAG: --cpu-cfs-quota="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538894 4662 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538903 4662 flags.go:64] FLAG: --cpu-manager-policy="none" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538912 4662 flags.go:64] FLAG: --cpu-manager-policy-options="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538924 4662 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538933 4662 flags.go:64] FLAG: --enable-controller-attach-detach="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538942 4662 flags.go:64] FLAG: --enable-debugging-handlers="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538951 4662 flags.go:64] FLAG: --enable-load-reader="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538961 4662 flags.go:64] FLAG: --enable-server="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538970 4662 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538983 4662 flags.go:64] FLAG: --event-burst="100" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.538994 4662 flags.go:64] FLAG: --event-qps="50" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539004 4662 flags.go:64] FLAG: --event-storage-age-limit="default=0" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539015 4662 flags.go:64] FLAG: --event-storage-event-limit="default=0" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539024 4662 flags.go:64] FLAG: --eviction-hard="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539036 4662 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539045 4662 flags.go:64] FLAG: --eviction-minimum-reclaim="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539054 4662 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539066 4662 flags.go:64] FLAG: --eviction-soft="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539074 4662 flags.go:64] FLAG: --eviction-soft-grace-period="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539084 4662 flags.go:64] FLAG: --exit-on-lock-contention="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539094 4662 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539103 4662 flags.go:64] FLAG: --experimental-mounter-path="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539111 4662 flags.go:64] FLAG: --fail-cgroupv1="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539120 4662 flags.go:64] FLAG: --fail-swap-on="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539129 4662 flags.go:64] FLAG: --feature-gates="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539139 4662 flags.go:64] FLAG: --file-check-frequency="20s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539148 4662 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539157 4662 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Dec 08 09:14:36 
crc kubenswrapper[4662]: I1208 09:14:36.539167 4662 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539176 4662 flags.go:64] FLAG: --healthz-port="10248" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539186 4662 flags.go:64] FLAG: --help="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539195 4662 flags.go:64] FLAG: --hostname-override="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539203 4662 flags.go:64] FLAG: --housekeeping-interval="10s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539212 4662 flags.go:64] FLAG: --http-check-frequency="20s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539221 4662 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539230 4662 flags.go:64] FLAG: --image-credential-provider-config="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539239 4662 flags.go:64] FLAG: --image-gc-high-threshold="85" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539249 4662 flags.go:64] FLAG: --image-gc-low-threshold="80" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539258 4662 flags.go:64] FLAG: --image-service-endpoint="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539267 4662 flags.go:64] FLAG: --kernel-memcg-notification="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539276 4662 flags.go:64] FLAG: --kube-api-burst="100" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539285 4662 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539295 4662 flags.go:64] FLAG: --kube-api-qps="50" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539304 4662 flags.go:64] FLAG: --kube-reserved="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539313 4662 flags.go:64] FLAG: --kube-reserved-cgroup="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539323 4662 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539332 4662 flags.go:64] FLAG: --kubelet-cgroups="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539341 4662 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539351 4662 flags.go:64] FLAG: --lock-file="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539359 4662 flags.go:64] FLAG: --log-cadvisor-usage="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539368 4662 flags.go:64] FLAG: --log-flush-frequency="5s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539378 4662 flags.go:64] FLAG: --log-json-info-buffer-size="0" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539409 4662 flags.go:64] FLAG: --log-json-split-stream="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539426 4662 flags.go:64] FLAG: --log-text-info-buffer-size="0" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539437 4662 flags.go:64] FLAG: --log-text-split-stream="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539448 4662 flags.go:64] FLAG: --logging-format="text" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539461 4662 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539475 4662 flags.go:64] FLAG: --make-iptables-util-chains="true" Dec 08 09:14:36 crc 
kubenswrapper[4662]: I1208 09:14:36.539487 4662 flags.go:64] FLAG: --manifest-url="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539499 4662 flags.go:64] FLAG: --manifest-url-header="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539515 4662 flags.go:64] FLAG: --max-housekeeping-interval="15s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539529 4662 flags.go:64] FLAG: --max-open-files="1000000" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539543 4662 flags.go:64] FLAG: --max-pods="110" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539556 4662 flags.go:64] FLAG: --maximum-dead-containers="-1" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539568 4662 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539579 4662 flags.go:64] FLAG: --memory-manager-policy="None" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539591 4662 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539602 4662 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539611 4662 flags.go:64] FLAG: --node-ip="192.168.126.11" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539634 4662 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539656 4662 flags.go:64] FLAG: --node-status-max-images="50" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539665 4662 flags.go:64] FLAG: --node-status-update-frequency="10s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539674 4662 flags.go:64] FLAG: --oom-score-adj="-999" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539683 4662 flags.go:64] FLAG: --pod-cidr="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539691 4662 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539704 4662 flags.go:64] FLAG: --pod-manifest-path="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539714 4662 flags.go:64] FLAG: --pod-max-pids="-1" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539725 4662 flags.go:64] FLAG: --pods-per-core="0" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539734 4662 flags.go:64] FLAG: --port="10250" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539776 4662 flags.go:64] FLAG: --protect-kernel-defaults="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539785 4662 flags.go:64] FLAG: --provider-id="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539795 4662 flags.go:64] FLAG: --qos-reserved="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539804 4662 flags.go:64] FLAG: --read-only-port="10255" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539813 4662 flags.go:64] FLAG: --register-node="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539822 4662 flags.go:64] FLAG: --register-schedulable="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539830 4662 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539846 4662 flags.go:64] FLAG: --registry-burst="10" Dec 08 09:14:36 crc 
kubenswrapper[4662]: I1208 09:14:36.539855 4662 flags.go:64] FLAG: --registry-qps="5" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539864 4662 flags.go:64] FLAG: --reserved-cpus="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539875 4662 flags.go:64] FLAG: --reserved-memory="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539885 4662 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539894 4662 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539904 4662 flags.go:64] FLAG: --rotate-certificates="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539912 4662 flags.go:64] FLAG: --rotate-server-certificates="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539921 4662 flags.go:64] FLAG: --runonce="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539930 4662 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539939 4662 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539949 4662 flags.go:64] FLAG: --seccomp-default="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539957 4662 flags.go:64] FLAG: --serialize-image-pulls="true" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539966 4662 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539976 4662 flags.go:64] FLAG: --storage-driver-db="cadvisor" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539985 4662 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.539995 4662 flags.go:64] FLAG: --storage-driver-password="root" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540004 4662 flags.go:64] FLAG: --storage-driver-secure="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540012 4662 flags.go:64] FLAG: --storage-driver-table="stats" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540021 4662 flags.go:64] FLAG: --storage-driver-user="root" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540030 4662 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540040 4662 flags.go:64] FLAG: --sync-frequency="1m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540049 4662 flags.go:64] FLAG: --system-cgroups="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540058 4662 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540073 4662 flags.go:64] FLAG: --system-reserved-cgroup="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540082 4662 flags.go:64] FLAG: --tls-cert-file="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540092 4662 flags.go:64] FLAG: --tls-cipher-suites="[]" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540106 4662 flags.go:64] FLAG: --tls-min-version="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540115 4662 flags.go:64] FLAG: --tls-private-key-file="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540124 4662 flags.go:64] FLAG: --topology-manager-policy="none" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540132 4662 flags.go:64] FLAG: --topology-manager-policy-options="" Dec 08 09:14:36 crc kubenswrapper[4662]: 
I1208 09:14:36.540141 4662 flags.go:64] FLAG: --topology-manager-scope="container" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540150 4662 flags.go:64] FLAG: --v="2" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540161 4662 flags.go:64] FLAG: --version="false" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540172 4662 flags.go:64] FLAG: --vmodule="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540183 4662 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.540192 4662 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540423 4662 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540434 4662 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540445 4662 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540455 4662 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540473 4662 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540483 4662 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540492 4662 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540500 4662 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540511 4662 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
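The flags.go:64 block above (from --address through --volume-stats-agg-period) records the effective value of every command-line flag at startup, defaults included, and is the quickest place to confirm how this node is actually configured: --config=/etc/kubernetes/kubelet.conf, --kubeconfig=/var/lib/kubelet/kubeconfig, --node-ip=192.168.126.11, and the master NoSchedule taint. A sketch that turns the dump into a lookup table, again assuming the journal text on stdin:

```python
import re
import sys

# flags.go logs each flag as: FLAG: --name="value" (defaults included).
FLAG = re.compile(r'FLAG: (--[A-Za-z0-9-]+)="([^"]*)"')

flags = dict(FLAG.findall(sys.stdin.read()))

# Spot-check the values that matter on this node; list-valued flags arrive
# as Go-rendered strings such as "[]" or "[pods]".
for name in ("--config", "--kubeconfig", "--node-ip", "--register-with-taints"):
    print(name, "=", flags.get(name))
```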
Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540522 4662 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540532 4662 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540541 4662 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540550 4662 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540558 4662 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540566 4662 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540573 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540581 4662 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540589 4662 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540598 4662 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540606 4662 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540616 4662 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540626 4662 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540636 4662 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540645 4662 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540655 4662 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540665 4662 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540676 4662 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540687 4662 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540697 4662 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540708 4662 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540720 4662 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540731 4662 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540777 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540789 4662 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540799 4662 feature_gate.go:330] unrecognized feature gate: 
MachineAPIMigration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540809 4662 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540820 4662 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540829 4662 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540839 4662 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540847 4662 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540855 4662 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540863 4662 feature_gate.go:330] unrecognized feature gate: Example Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540871 4662 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540879 4662 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540887 4662 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540894 4662 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540902 4662 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540910 4662 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540918 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540925 4662 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540933 4662 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540941 4662 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540950 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540961 4662 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540971 4662 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540979 4662 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540989 4662 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.540999 4662 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541008 4662 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541015 4662 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541023 4662 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541031 4662 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541039 4662 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541047 4662 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541057 4662 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541068 4662 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541076 4662 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541085 4662 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541095 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541103 4662 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.541111 4662 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.541329 4662 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.551034 4662 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.551267 4662 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.551399 4662 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.551461 4662 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.551522 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.551586 4662 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.551671 4662 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.551718 4662 
feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.551810 4662 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.551869 4662 feature_gate.go:330] unrecognized feature gate: Example Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.551927 4662 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.551980 4662 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552038 4662 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552100 4662 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552169 4662 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552243 4662 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552303 4662 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552358 4662 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552450 4662 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552508 4662 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552656 4662 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552706 4662 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552771 4662 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552836 4662 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552882 4662 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552924 4662 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.552972 4662 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553016 4662 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553059 4662 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553101 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553144 4662 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553187 4662 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553231 4662 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553277 4662 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553321 4662 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553369 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553416 4662 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553460 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553510 4662 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553556 4662 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553606 4662 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553650 4662 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553702 4662 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553783 4662 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553839 4662 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553885 4662 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553939 4662 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.553992 4662 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554036 4662 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554099 4662 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554152 4662 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554196 4662 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554243 4662 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554288 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554330 4662 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554375 4662 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554422 4662 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554472 4662 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554529 4662 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554587 4662 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554642 4662 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554698 4662 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
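Mixed into the unrecognized-gate noise are feature_gate.go:351 and :353 lines for gates the kubelet does recognize but which are deprecated (KMSv1) or already GA (CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, ValidatingAdmissionPolicy) while still being set explicitly. Each carries a removal notice, so these are the settings worth auditing before an upgrade. A sketch that extracts just those lines:

```python
import re
import sys

# feature_gate.go:351 flags deprecated gates, :353 flags GA gates; both carry
# a removal notice, so explicit settings like these break on a future upgrade.
SETTING = re.compile(r"Setting (GA|deprecated) feature gate ([A-Za-z0-9]+)=(true|false)")

seen = {}
for kind, gate, value in SETTING.findall(sys.stdin.read()):
    seen[gate] = (kind, value)

for gate in sorted(seen):
    kind, value = seen[gate]
    print(f"{gate}={value}  ({kind}, scheduled for removal)")
```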
Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.554937 4662 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555007 4662 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555078 4662 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555137 4662 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555197 4662 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555255 4662 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555310 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555367 4662 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555423 4662 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555477 4662 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555536 4662 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.555601 4662 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555826 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555891 4662 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555938 4662 feature_gate.go:330] unrecognized feature gate: InsightsConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.555981 4662 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556023 4662 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556071 4662 feature_gate.go:330] unrecognized feature gate: PinnedImages Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556120 4662 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556165 4662 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556207 4662 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556255 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556299 4662 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556341 4662 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556417 4662 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556464 4662 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556518 4662 feature_gate.go:330] unrecognized feature gate: Example Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556574 4662 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556627 4662 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556685 4662 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556760 4662 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556832 4662 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556892 4662 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556937 4662 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.556982 4662 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557025 4662 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557068 4662 feature_gate.go:330] unrecognized feature gate: OVNObservability Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557110 4662 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557158 4662 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557207 4662 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557250 4662 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557293 4662 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557341 4662 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557385 4662 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557432 4662 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557480 4662 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557535 4662 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557581 4662 feature_gate.go:330] unrecognized feature gate: GatewayAPI Dec 08 09:14:36 crc kubenswrapper[4662]: 
W1208 09:14:36.557626 4662 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557675 4662 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557729 4662 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557799 4662 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557863 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557920 4662 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.557973 4662 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558040 4662 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558099 4662 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558160 4662 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558217 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558280 4662 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558346 4662 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558403 4662 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558458 4662 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558509 4662 feature_gate.go:330] unrecognized feature gate: SignatureStores Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558555 4662 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558600 4662 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558654 4662 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558703 4662 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558764 4662 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558813 4662 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558858 4662 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558901 4662 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558953 4662 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.558999 4662 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.559042 4662 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.559086 4662 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.559129 4662 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.559172 4662 feature_gate.go:330] unrecognized feature gate: NewOLM Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.559215 4662 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.559262 4662 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.559306 4662 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.559348 4662 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.559394 4662 feature_gate.go:330] unrecognized feature gate: PlatformOperators Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.559441 4662 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.559792 4662 server.go:940] "Client rotation is on, will bootstrap in background" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.562774 4662 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.563128 4662 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
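Each parsing pass ends with a feature_gate.go:386 summary of the effective gate map, and all three occurrences above are identical, so the last one is authoritative for this boot: KMSv1, CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, and ValidatingAdmissionPolicy are true, and everything else listed is false. A sketch that parses Go's map rendering back into a dictionary (this assumes the exact space-separated Name:bool format shown above):

```python
import re
import sys

# feature_gate.go:386 prints the effective map via Go's fmt, e.g.:
#   feature gates: {map[CloudDualStackNodeIPs:true ... VolumeAttributesClass:false]}
maps = re.findall(r"feature gates: \{map\[([^\]]*)\]\}", sys.stdin.read())
if maps:
    # The last occurrence reflects the final state; pairs are "Name:bool".
    gates = dict(pair.split(":") for pair in maps[-1].split())
    for name in sorted(gates):
        print(f"{name} = {gates[name]}")
```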
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.563867 4662 server.go:997] "Starting client certificate rotation"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.563948 4662 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.564365 4662 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-11-11 05:50:56.001200568 +0000 UTC
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.564913 4662 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.569079 4662 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.571158 4662 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.571926 4662 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.583676 4662 log.go:25] "Validated CRI v1 runtime API"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.604844 4662 log.go:25] "Validated CRI v1 image API"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.606802 4662 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.609347 4662 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-12-08-09-08-24-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.609390 4662 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.622264 4662 manager.go:217] Machine: {Timestamp:2025-12-08 09:14:36.621006689 +0000 UTC m=+0.190034699 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2800000 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:07345261-8303-4980-8140-240ca8110023 BootID:8ede791b-f654-4671-ae51-71d01a124d69 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:34:bf:06 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:34:bf:06 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:81:b6:c6 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:38:9e:b1 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:85:73:1a Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:37:7c:ea Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:99:1b:4f Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e6:1b:02:d2:57:77 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:e1:86:16:bd:8b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.622471 4662 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.622626 4662 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.623152 4662 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.623315 4662 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.623354 4662 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.623555 4662 topology_manager.go:138] "Creating topology manager with none policy"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.623568 4662 container_manager_linux.go:303] "Creating device plugin manager"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.623813 4662 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.623850 4662 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.624180 4662 state_mem.go:36] "Initialized new in-memory state store"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.624344 4662 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.625057 4662 kubelet.go:418] "Attempting to sync node with API server"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.625084 4662 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.625112 4662 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.625131 4662 kubelet.go:324] "Adding apiserver pod source"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.625143 4662 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.627631 4662 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.628134 4662 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.628694 4662 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.628737 4662 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.628823 4662 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.628904 4662 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629030 4662 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629608 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629633 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629641 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629649 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629662 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629671 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629680 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629694 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629708 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629717 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629759 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629769 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.629986 4662 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.630690 4662 server.go:1280] "Started kubelet"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.630869 4662 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.631230 4662 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.631234 4662 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 08 09:14:36 crc systemd[1]: Started Kubernetes Kubelet.
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.633841 4662 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.635436 4662 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f32a820517d0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 09:14:36.630629647 +0000 UTC m=+0.199657647,LastTimestamp:2025-12-08 09:14:36.630629647 +0000 UTC m=+0.199657647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.637513 4662 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.637919 4662 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.639016 4662 volume_manager.go:287] "The desired_state_of_world populator starts"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.639042 4662 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.638078 4662 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 16:42:24.615002916 +0000 UTC
Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.639207 4662 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 895h27m47.975801831s
for next certificate rotation Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.639299 4662 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.641200 4662 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.641301 4662 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.641335 4662 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.641233 4662 server.go:460] "Adding debug handlers to kubelet server" Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.643073 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="200ms" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.650218 4662 factory.go:55] Registering systemd factory Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.650546 4662 factory.go:221] Registration of the systemd container factory successfully Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652420 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652482 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652494 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652505 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652514 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652524 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652534 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652544 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652556 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652567 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652582 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652593 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652602 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652614 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652623 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652662 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652671 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" 
volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652680 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652690 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652699 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652708 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652716 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652726 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652786 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652829 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652840 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652854 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652865 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652875 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652884 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652893 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652902 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652910 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652939 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652951 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652963 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652973 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652984 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.652995 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653007 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653017 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653027 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653043 4662 factory.go:153] Registering CRI-O factory Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653036 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653083 4662 factory.go:221] Registration of the crio container factory successfully Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653182 4662 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653223 4662 factory.go:103] Registering Raw factory Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653254 4662 manager.go:1196] Started watching for new ooms in manager Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653587 4662 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653615 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653627 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653637 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653648 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653658 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653667 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653678 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653690 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653701 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653717 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653733 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653766 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653777 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653787 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653797 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653807 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653818 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653830 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653839 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653849 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653860 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653870 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653881 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653891 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653901 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653914 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653923 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653934 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653943 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653952 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653961 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653970 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653979 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.653995 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654005 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654015 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" 
volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654024 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654034 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654045 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654056 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654065 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654077 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654087 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654096 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654106 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654117 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654127 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" 
volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654137 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654147 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654158 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654168 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654177 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654187 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654198 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654207 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654217 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654227 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654238 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654247 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654260 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654271 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654291 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654303 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654314 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654324 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654335 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654345 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654356 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654367 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654378 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654390 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654399 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654409 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654419 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654430 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654440 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654452 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654463 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654475 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654490 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654507 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654521 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654531 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654541 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654552 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654562 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654575 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654588 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654601 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654614 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654627 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654639 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654650 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654667 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654678 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654690 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654700 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654714 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654723 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654734 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654764 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654777 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654788 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654802 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654814 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654831 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654843 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654856 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654869 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654882 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654895 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654907 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654920 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" 
volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654933 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654943 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654954 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654964 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654973 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654984 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.654994 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655006 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655020 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655055 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655067 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655077 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655089 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655098 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655124 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655626 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655649 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655662 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655673 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655684 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655698 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655717 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" 
volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655731 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655788 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655804 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655818 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655829 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655841 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655853 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655864 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655880 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655891 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655904 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655915 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655927 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655939 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655951 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655960 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655972 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655985 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655996 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656007 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656020 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656029 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656041 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656054 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656065 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656077 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656091 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656113 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656126 4662 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656136 4662 reconstruct.go:97] "Volume reconstruction finished" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.656143 4662 reconciler.go:26] "Reconciler: start to sync state" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.655471 4662 manager.go:319] Starting recovery of all containers Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.671841 4662 manager.go:324] Recovery completed Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.682429 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.685989 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.686046 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.686059 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 
09:14:36.687203 4662 cpu_manager.go:225] "Starting CPU manager" policy="none" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.687298 4662 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.687382 4662 state_mem.go:36] "Initialized new in-memory state store" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.694281 4662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.696083 4662 policy_none.go:49] "None policy: Start" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.696186 4662 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.696240 4662 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.696278 4662 kubelet.go:2335] "Starting kubelet main sync loop" Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.696348 4662 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.697307 4662 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.697461 4662 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.697903 4662 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.697941 4662 state_mem.go:35] "Initializing new in-memory state store" Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.742026 4662 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.753491 4662 manager.go:334] "Starting Device Plugin manager" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.753568 4662 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.753589 4662 server.go:79] "Starting device plugin registration server" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.756346 4662 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.756367 4662 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.756616 4662 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.756702 4662 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.756713 4662 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 08 
09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.766499 4662 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.796769 4662 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.796911 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.798146 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.798185 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.798199 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.798534 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.799038 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.799126 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.799731 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.799813 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.799828 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.800027 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.800187 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.800229 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.800864 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.800903 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.800915 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.801241 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.801487 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.801507 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.802144 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.802176 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.802194 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.802203 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.802425 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.802467 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.803243 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.803262 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.803270 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.806102 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.806143 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.806153 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.806304 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.806621 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.806668 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.807083 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.807115 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.807125 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.807314 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.807341 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.808195 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.808227 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.808236 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.808508 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.808536 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.808544 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: W1208 09:14:36.810456 4662 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/cpu.max": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/cpu.max: no such device Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.844873 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="400ms" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.856694 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.857885 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.857980 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858044 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858118 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858196 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858276 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858411 4662 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858096 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858582 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858609 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858634 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858656 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858757 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858826 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858889 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858915 4662 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858940 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858966 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.858987 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: E1208 09:14:36.859437 4662 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960661 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960735 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960796 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960822 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960855 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960877 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960900 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960926 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960939 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960982 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961034 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.960950 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961066 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961089 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961007 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961112 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961167 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961142 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961189 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961161 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961212 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961109 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961211 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961248 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961269 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: 
\"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961276 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961288 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961353 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961366 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 09:14:36 crc kubenswrapper[4662]: I1208 09:14:36.961410 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.060262 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.064244 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.064311 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.064325 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.064375 4662 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 08 09:14:37 crc kubenswrapper[4662]: E1208 09:14:37.064989 4662 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.137137 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.154882 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Dec 08 09:14:37 crc kubenswrapper[4662]: W1208 09:14:37.162304 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-ff6dc2dcb108377d543309ffd20a8dffad1ffd344537ff891b75f224f90b730d WatchSource:0}: Error finding container ff6dc2dcb108377d543309ffd20a8dffad1ffd344537ff891b75f224f90b730d: Status 404 returned error can't find the container with id ff6dc2dcb108377d543309ffd20a8dffad1ffd344537ff891b75f224f90b730d Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.164632 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:37 crc kubenswrapper[4662]: W1208 09:14:37.179830 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-cf96fa484b7e05c6ed587117c5261207b6ef7ea7bd1dc18893a7e76d250ede00 WatchSource:0}: Error finding container cf96fa484b7e05c6ed587117c5261207b6ef7ea7bd1dc18893a7e76d250ede00: Status 404 returned error can't find the container with id cf96fa484b7e05c6ed587117c5261207b6ef7ea7bd1dc18893a7e76d250ede00 Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.182001 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.189444 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 09:14:37 crc kubenswrapper[4662]: W1208 09:14:37.243829 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-5b3c3cac338b4e142568ab935e26e094e06ec28848d682e31298185bb322596b WatchSource:0}: Error finding container 5b3c3cac338b4e142568ab935e26e094e06ec28848d682e31298185bb322596b: Status 404 returned error can't find the container with id 5b3c3cac338b4e142568ab935e26e094e06ec28848d682e31298185bb322596b Dec 08 09:14:37 crc kubenswrapper[4662]: E1208 09:14:37.246207 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="800ms" Dec 08 09:14:37 crc kubenswrapper[4662]: W1208 09:14:37.257534 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-8b7d33d82e54c49088853092f2a064024bce9cf252a2bfb803d51e43ac4054c8 WatchSource:0}: Error finding container 8b7d33d82e54c49088853092f2a064024bce9cf252a2bfb803d51e43ac4054c8: Status 404 returned error can't find the container with id 8b7d33d82e54c49088853092f2a064024bce9cf252a2bfb803d51e43ac4054c8 Dec 08 09:14:37 crc kubenswrapper[4662]: W1208 09:14:37.261890 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-9d94800990cab26529de41542215e26c6306f21574a65dc34ac55663ba098a65 WatchSource:0}: Error finding container 9d94800990cab26529de41542215e26c6306f21574a65dc34ac55663ba098a65: Status 404 returned 
error can't find the container with id 9d94800990cab26529de41542215e26c6306f21574a65dc34ac55663ba098a65 Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.465636 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.466656 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.466682 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.466691 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.466711 4662 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 08 09:14:37 crc kubenswrapper[4662]: E1208 09:14:37.467183 4662 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc" Dec 08 09:14:37 crc kubenswrapper[4662]: W1208 09:14:37.566762 4662 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Dec 08 09:14:37 crc kubenswrapper[4662]: E1208 09:14:37.566866 4662 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.632222 4662 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Dec 08 09:14:37 crc kubenswrapper[4662]: W1208 09:14:37.660246 4662 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Dec 08 09:14:37 crc kubenswrapper[4662]: E1208 09:14:37.660347 4662 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Dec 08 09:14:37 crc kubenswrapper[4662]: W1208 09:14:37.696968 4662 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Dec 08 09:14:37 crc kubenswrapper[4662]: E1208 09:14:37.697058 4662 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.702606 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35"} Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.702686 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9d94800990cab26529de41542215e26c6306f21574a65dc34ac55663ba098a65"} Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.704440 4662 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273" exitCode=0 Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.704489 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273"} Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.704505 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"8b7d33d82e54c49088853092f2a064024bce9cf252a2bfb803d51e43ac4054c8"} Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.704575 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.706181 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.706263 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.706286 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.707131 4662 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6" exitCode=0 Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.707188 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6"} Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.707214 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5b3c3cac338b4e142568ab935e26e094e06ec28848d682e31298185bb322596b"} Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.707304 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 
09:14:37.708148 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.708167 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.708177 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.709373 4662 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7" exitCode=0 Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.709450 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.709458 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7"} Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.709491 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cf96fa484b7e05c6ed587117c5261207b6ef7ea7bd1dc18893a7e76d250ede00"} Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.709670 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.711092 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.711162 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.711197 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.714615 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.714655 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.714671 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.718287 4662 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="8a819e827e003b01efd3f5d6351025d2f94beb466801dcd4fddb66a01105ce18" exitCode=0 Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.718410 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"8a819e827e003b01efd3f5d6351025d2f94beb466801dcd4fddb66a01105ce18"} Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.718505 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"ff6dc2dcb108377d543309ffd20a8dffad1ffd344537ff891b75f224f90b730d"} Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.718658 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.720317 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.720483 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:37 crc kubenswrapper[4662]: I1208 09:14:37.720683 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:37 crc kubenswrapper[4662]: E1208 09:14:37.907087 4662 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f32a820517d0f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 09:14:36.630629647 +0000 UTC m=+0.199657647,LastTimestamp:2025-12-08 09:14:36.630629647 +0000 UTC m=+0.199657647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 09:14:38 crc kubenswrapper[4662]: E1208 09:14:38.047255 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="1.6s" Dec 08 09:14:38 crc kubenswrapper[4662]: W1208 09:14:38.134316 4662 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Dec 08 09:14:38 crc kubenswrapper[4662]: E1208 09:14:38.134495 4662 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.190:6443: connect: connection refused" logger="UnhandledError" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.267540 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.268529 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.268560 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.268571 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.268593 4662 kubelet_node_status.go:76] "Attempting to register 
node" node="crc" Dec 08 09:14:38 crc kubenswrapper[4662]: E1208 09:14:38.269085 4662 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.190:6443: connect: connection refused" node="crc" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.632222 4662 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.190:6443: connect: connection refused Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.675581 4662 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.722330 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.722374 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.722389 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.722405 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.723824 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.723847 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.723855 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.725501 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"84fc22fcc1e843d22fcc4e29869b46df1a476fa4795df8e160a445bf5b3bd9d8"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.725524 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a0b37548382b18608f951df58067092f5866a5cceb62e0abcefff44e03cd54ee"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.725533 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a199dd10a8d8a2a7788c5c31a11e6a2aa46d73271d32f065ccc144efc67c81af"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.725584 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.726378 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.726392 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.726399 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.728994 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.729013 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.729022 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.729030 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.730431 4662 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc" exitCode=0 Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.730503 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.730574 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.731175 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.731193 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.731200 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.737103 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"2d5d27b6cd44ae2caa61b083d6a44bd39c278be619f246dbf2c4747d99210f87"} Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.737197 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:38 crc 
kubenswrapper[4662]: I1208 09:14:38.737891 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.737912 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:38 crc kubenswrapper[4662]: I1208 09:14:38.737921 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.746214 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342"} Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.746277 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.747439 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.747484 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.747499 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.748951 4662 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934" exitCode=0 Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.749029 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.749029 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934"} Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.749190 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.749729 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.749816 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.749829 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.749914 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.749935 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.749944 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.870079 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.871421 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.871468 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.871481 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:39 crc kubenswrapper[4662]: I1208 09:14:39.871509 4662 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.561100 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.761066 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba"} Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.761127 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20"} Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.761139 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.761146 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58"} Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.761162 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb"} Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.761175 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.761296 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.762258 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.762294 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.762307 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.763619 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.763654 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:40 crc kubenswrapper[4662]: I1208 09:14:40.763667 4662 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:41 crc kubenswrapper[4662]: I1208 09:14:41.768297 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2"} Dec 08 09:14:41 crc kubenswrapper[4662]: I1208 09:14:41.768439 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:41 crc kubenswrapper[4662]: I1208 09:14:41.768503 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:41 crc kubenswrapper[4662]: I1208 09:14:41.769942 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:41 crc kubenswrapper[4662]: I1208 09:14:41.769998 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:41 crc kubenswrapper[4662]: I1208 09:14:41.770013 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:41 crc kubenswrapper[4662]: I1208 09:14:41.770039 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:41 crc kubenswrapper[4662]: I1208 09:14:41.770092 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:41 crc kubenswrapper[4662]: I1208 09:14:41.770132 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:41 crc kubenswrapper[4662]: I1208 09:14:41.861545 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.660054 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.660340 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.662403 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.662446 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.662460 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.666883 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.771291 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.771626 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.771665 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.773489 4662 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.773529 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.773549 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.773568 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.773617 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.773642 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.774118 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.774143 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.774156 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:42 crc kubenswrapper[4662]: I1208 09:14:42.974835 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.204877 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.215479 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.434730 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.774132 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.774326 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.775964 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.776805 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.776881 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.776901 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.777925 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.777990 4662 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.778024 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.778761 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.778805 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:43 crc kubenswrapper[4662]: I1208 09:14:43.778826 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:44 crc kubenswrapper[4662]: I1208 09:14:44.776503 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:44 crc kubenswrapper[4662]: I1208 09:14:44.777361 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:44 crc kubenswrapper[4662]: I1208 09:14:44.777392 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:44 crc kubenswrapper[4662]: I1208 09:14:44.777403 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:46 crc kubenswrapper[4662]: I1208 09:14:46.435163 4662 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 09:14:46 crc kubenswrapper[4662]: I1208 09:14:46.435279 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 08 09:14:46 crc kubenswrapper[4662]: E1208 09:14:46.766689 4662 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 09:14:47 crc kubenswrapper[4662]: I1208 09:14:47.392173 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 09:14:47 crc kubenswrapper[4662]: I1208 09:14:47.392664 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:47 crc kubenswrapper[4662]: I1208 09:14:47.393991 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:47 crc kubenswrapper[4662]: I1208 09:14:47.394033 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:47 crc kubenswrapper[4662]: I1208 09:14:47.394046 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:48 crc kubenswrapper[4662]: I1208 09:14:48.311619 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 08 09:14:48 crc kubenswrapper[4662]: I1208 09:14:48.311889 4662 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:48 crc kubenswrapper[4662]: I1208 09:14:48.313347 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:48 crc kubenswrapper[4662]: I1208 09:14:48.313389 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:48 crc kubenswrapper[4662]: I1208 09:14:48.313398 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:48 crc kubenswrapper[4662]: I1208 09:14:48.535216 4662 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 09:14:48 crc kubenswrapper[4662]: I1208 09:14:48.535315 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 09:14:48 crc kubenswrapper[4662]: I1208 09:14:48.594620 4662 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 09:14:48 crc kubenswrapper[4662]: I1208 09:14:48.594696 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 09:14:48 crc kubenswrapper[4662]: E1208 09:14:48.677715 4662 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 08 09:14:49 crc kubenswrapper[4662]: W1208 09:14:49.371420 4662 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Dec 08 09:14:49 crc kubenswrapper[4662]: I1208 09:14:49.371520 4662 trace.go:236] Trace[1014715031]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 09:14:39.369) (total time: 10001ms): Dec 08 09:14:49 crc kubenswrapper[4662]: Trace[1014715031]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (09:14:49.371) Dec 08 09:14:49 crc kubenswrapper[4662]: Trace[1014715031]: [10.001804713s] [10.001804713s] END Dec 08 09:14:49 crc kubenswrapper[4662]: E1208 09:14:49.371545 4662 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Dec 08 09:14:49 crc kubenswrapper[4662]: I1208 09:14:49.493626 4662 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 09:14:49 crc kubenswrapper[4662]: I1208 09:14:49.493702 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 08 09:14:49 crc kubenswrapper[4662]: I1208 09:14:49.498579 4662 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 09:14:49 crc kubenswrapper[4662]: I1208 09:14:49.498821 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 08 09:14:50 crc kubenswrapper[4662]: I1208 09:14:50.566151 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:50 crc kubenswrapper[4662]: I1208 09:14:50.566303 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:50 crc kubenswrapper[4662]: I1208 09:14:50.567539 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:50 crc kubenswrapper[4662]: I1208 09:14:50.567595 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:50 crc kubenswrapper[4662]: I1208 09:14:50.567607 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:52 crc kubenswrapper[4662]: I1208 09:14:52.772996 4662 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Dec 08 09:14:52 crc kubenswrapper[4662]: I1208 09:14:52.793247 4662 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Dec 08 09:14:52 crc kubenswrapper[4662]: I1208 09:14:52.839573 4662 csr.go:261] certificate signing request csr-6mpqj is approved, waiting to be issued Dec 08 09:14:52 crc kubenswrapper[4662]: I1208 09:14:52.850477 4662 csr.go:257] certificate signing request csr-6mpqj is issued Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.211223 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.211421 4662 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.212641 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.212801 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.212907 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.216949 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.797501 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.798276 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.798309 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.798322 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.852144 4662 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-12-08 09:09:52 +0000 UTC, rotation deadline is 2026-10-08 16:43:57.365362097 +0000 UTC Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.852183 4662 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7303h29m3.513182055s for next certificate rotation Dec 08 09:14:53 crc kubenswrapper[4662]: I1208 09:14:53.854507 4662 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.482396 4662 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.482402 4662 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.483703 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="3.2s" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.485583 4662 trace.go:236] Trace[989471140]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 09:14:40.572) (total time: 13913ms): Dec 08 09:14:54 crc kubenswrapper[4662]: Trace[989471140]: ---"Objects listed" error: 13913ms (09:14:54.485) Dec 08 09:14:54 crc kubenswrapper[4662]: Trace[989471140]: [13.913200802s] [13.913200802s] END Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.485613 4662 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.486266 4662 trace.go:236] Trace[204032985]: "Reflector ListAndWatch" 
name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 09:14:40.267) (total time: 14219ms): Dec 08 09:14:54 crc kubenswrapper[4662]: Trace[204032985]: ---"Objects listed" error: 14219ms (09:14:54.486) Dec 08 09:14:54 crc kubenswrapper[4662]: Trace[204032985]: [14.219048889s] [14.219048889s] END Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.486304 4662 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.487813 4662 trace.go:236] Trace[871027148]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 09:14:39.772) (total time: 14715ms): Dec 08 09:14:54 crc kubenswrapper[4662]: Trace[871027148]: ---"Objects listed" error: 14715ms (09:14:54.487) Dec 08 09:14:54 crc kubenswrapper[4662]: Trace[871027148]: [14.715636785s] [14.715636785s] END Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.487841 4662 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.593156 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.608627 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.636389 4662 apiserver.go:52] "Watching apiserver" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.640468 4662 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.641078 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.641561 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.641790 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.642038 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.641797 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.642325 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.642595 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.642587 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.642779 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.642897 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.645390 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.645429 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.646126 4662 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53740->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.646168 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:53740->192.168.126.11:17697: read: connection reset by peer" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.646393 4662 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.646415 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.647766 4662 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"kube-root-ca.crt" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.647985 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.648268 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.648489 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.649217 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.649261 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.649841 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.687928 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.708965 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.734424 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.734809 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xkcpj"] Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.735287 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xkcpj" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.736737 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.742005 4662 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.742016 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.744014 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.756114 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.770704 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.784660 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.784701 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.784719 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.784762 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.784784 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.784799 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.784814 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.784829 4662 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.784844 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.784858 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.785167 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.785259 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.785432 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.785612 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.785814 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786079 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786201 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786229 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786250 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786275 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786295 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786316 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786439 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786462 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786478 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786493 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786514 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786532 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786548 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786563 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786579 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786597 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786611 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786615 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786654 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786674 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786694 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786714 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786733 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786776 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786819 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786837 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786857 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786876 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786888 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786903 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.786987 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787021 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787047 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787066 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787079 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787088 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787114 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787146 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787171 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787192 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787211 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787229 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787243 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787250 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787272 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787299 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787333 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787353 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787370 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787386 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787390 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787403 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787425 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787473 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787494 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787516 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787533 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787549 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787550 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787572 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787607 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787627 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787647 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787663 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787681 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787696 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787713 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.787731 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788333 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788399 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788422 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788480 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788503 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788527 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788543 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788562 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788578 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788603 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788620 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788640 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788658 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788685 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788705 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788724 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788782 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788816 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788846 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788870 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788892 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788910 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788929 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788948 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788967 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788985 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789002 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789021 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789044 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789064 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789101 4662 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789120 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789138 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789157 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789180 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789205 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789231 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789258 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789286 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789318 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789348 4662 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789387 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789412 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789437 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789456 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789472 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789489 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789508 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789525 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789544 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789563 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789612 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789634 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789656 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789677 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789700 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789719 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789757 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789779 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789801 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789818 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789837 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789853 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789872 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789889 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790053 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790075 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790096 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790118 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790136 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790154 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790173 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790206 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790235 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790267 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790286 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790312 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790361 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790390 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790412 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" 
(UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790436 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790461 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790483 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790507 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790531 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790554 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790576 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790597 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790622 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790650 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790671 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790693 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790717 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790762 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790786 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790811 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790834 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790855 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790876 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790899 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" 
(UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790924 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790946 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790967 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790991 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791015 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791035 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791053 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791074 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791094 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791115 4662 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791133 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791150 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791172 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791214 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791235 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791252 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791272 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791291 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791310 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791330 4662 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791350 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791370 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791387 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791403 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791461 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791491 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791513 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791534 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791554 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791578 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6tjk\" (UniqueName: \"kubernetes.io/projected/fe880ac0-787a-43d5-90a4-5e7fa966f71d-kube-api-access-w6tjk\") pod \"node-resolver-xkcpj\" (UID: \"fe880ac0-787a-43d5-90a4-5e7fa966f71d\") " pod="openshift-dns/node-resolver-xkcpj" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791597 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791624 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791642 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fe880ac0-787a-43d5-90a4-5e7fa966f71d-hosts-file\") pod \"node-resolver-xkcpj\" (UID: \"fe880ac0-787a-43d5-90a4-5e7fa966f71d\") " pod="openshift-dns/node-resolver-xkcpj" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791664 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791688 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791708 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791725 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: 
\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791794 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791820 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791843 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791930 4662 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791942 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791953 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791964 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791976 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791987 4662 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791997 4662 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.792006 4662 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.792018 4662 reconciler_common.go:293] 
"Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.792029 4662 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.792040 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.792052 4662 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.793835 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788452 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788711 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.788964 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789301 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789560 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789751 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.789931 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790114 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790286 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790422 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790480 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790616 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790681 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.790915 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791085 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791144 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791311 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791317 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791489 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791549 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791653 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791870 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.791924 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.792024 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.792513 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.793033 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.793409 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). 
InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.793499 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.793836 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.794311 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.794645 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.795036 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.795486 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.795901 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.796895 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.796929 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.797239 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.797308 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.797443 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.797529 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.798031 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.798239 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.798381 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.798389 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.798500 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:14:55.298480166 +0000 UTC m=+18.867508156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.827335 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.827533 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.828115 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.828662 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.828877 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.828890 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.829372 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.829416 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.829625 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.829728 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.829795 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.829860 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.830029 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.798577 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.798806 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.799021 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.799198 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.830302 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.799259 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.799500 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.799638 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.830357 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.800090 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.800142 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.800867 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.800995 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.801703 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.801941 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.802023 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.802177 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.802536 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.802562 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.802709 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). 
InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.802931 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.803034 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.803074 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.803228 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.803559 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.803619 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.803879 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.804041 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). 
InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.804189 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.804342 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.804093 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.804877 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.804890 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.805069 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.830587 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.805551 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.805722 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.805796 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.806249 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.806340 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.806351 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.808571 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.808867 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.809155 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.809452 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.809667 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.809773 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.809915 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.810124 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.810639 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.811227 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.811406 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.811424 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.812240 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.813091 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.813235 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.813413 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.813839 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.814043 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.814255 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.814275 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.814458 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.814582 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.815029 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.815063 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.815107 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.815589 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.815723 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.816011 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.816153 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.816368 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.816415 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.816477 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.816487 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.816495 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.816624 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.816646 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.816919 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.817110 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.817121 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.817183 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.818039 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.818146 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.825321 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.826159 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.826522 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.799897 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.830631 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.830644 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.830793 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.831067 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.831154 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.831366 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.831445 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.831407 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.831498 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.831547 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.831723 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.831817 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.831908 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.832219 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.832458 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.832816 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.833157 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.833642 4662 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.833707 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:55.333688757 +0000 UTC m=+18.902716927 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.835778 4662 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.836201 4662 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.836289 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:55.33627215 +0000 UTC m=+18.905300140 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.837022 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.839350 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.842321 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.842444 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.842798 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.842831 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.852809 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.852956 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.852987 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.853080 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.853234 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.854684 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.855466 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.856230 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.860400 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.860526 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.860878 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.860908 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.862596 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.862614 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.862848 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.862986 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.863616 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.864158 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.865630 4662 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342" exitCode=255 Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.865687 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342"} Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.876545 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893403 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6tjk\" (UniqueName: \"kubernetes.io/projected/fe880ac0-787a-43d5-90a4-5e7fa966f71d-kube-api-access-w6tjk\") pod \"node-resolver-xkcpj\" (UID: \"fe880ac0-787a-43d5-90a4-5e7fa966f71d\") " pod="openshift-dns/node-resolver-xkcpj" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893464 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893500 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fe880ac0-787a-43d5-90a4-5e7fa966f71d-hosts-file\") pod \"node-resolver-xkcpj\" (UID: \"fe880ac0-787a-43d5-90a4-5e7fa966f71d\") " pod="openshift-dns/node-resolver-xkcpj" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893542 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893603 4662 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893617 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893630 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893644 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893658 4662 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893670 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893682 4662 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: 
I1208 09:14:54.893695 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893707 4662 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893718 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893729 4662 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893761 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893775 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893789 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893801 4662 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893813 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893825 4662 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893836 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893848 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893860 4662 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893873 4662 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893885 4662 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893897 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893908 4662 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893919 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893932 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893945 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893957 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893968 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893980 4662 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.893993 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894005 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894018 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc 
kubenswrapper[4662]: I1208 09:14:54.894029 4662 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894041 4662 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894053 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894064 4662 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894077 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894089 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894100 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894111 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894123 4662 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894133 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894146 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894159 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894173 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" 
Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894186 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894201 4662 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894213 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894225 4662 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894237 4662 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894249 4662 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894260 4662 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894271 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894283 4662 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894294 4662 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894306 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894317 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894329 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc 
kubenswrapper[4662]: I1208 09:14:54.894341 4662 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894351 4662 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894362 4662 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894372 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894385 4662 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894395 4662 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894406 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894420 4662 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894431 4662 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894443 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894457 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894468 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894480 4662 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894491 
4662 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894502 4662 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894512 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894524 4662 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894535 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894547 4662 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894560 4662 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894570 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894581 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894591 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894602 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894613 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894623 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894635 4662 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894647 4662 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894659 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894670 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894680 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894691 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894702 4662 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894714 4662 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894725 4662 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894736 4662 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894771 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894784 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894796 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894807 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: 
\"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894818 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894829 4662 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894840 4662 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894852 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894863 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894875 4662 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894886 4662 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894897 4662 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894908 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894919 4662 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894931 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894941 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894953 4662 reconciler_common.go:293] "Volume detached for volume 
\"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894970 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.894983 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895004 4662 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895014 4662 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895025 4662 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895036 4662 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895046 4662 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895057 4662 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895068 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895079 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895090 4662 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895100 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895111 4662 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895122 4662 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895132 4662 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895142 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895153 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895163 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895175 4662 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895186 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895198 4662 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895210 4662 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895221 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895232 4662 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895242 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895252 4662 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895263 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895273 4662 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895284 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895296 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895305 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895315 4662 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895327 4662 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895338 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895350 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895361 4662 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895372 4662 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895385 4662 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895398 4662 reconciler_common.go:293] "Volume detached 
for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895410 4662 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895421 4662 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895432 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895444 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895455 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895470 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895483 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895497 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895508 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895521 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895532 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895545 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Dec 08 
09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895557 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895568 4662 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895579 4662 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895591 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895603 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895614 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895626 4662 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895639 4662 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895651 4662 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895663 4662 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895674 4662 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895686 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895697 4662 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 
09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895707 4662 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895718 4662 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895728 4662 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895867 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895954 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/fe880ac0-787a-43d5-90a4-5e7fa966f71d-hosts-file\") pod \"node-resolver-xkcpj\" (UID: \"fe880ac0-787a-43d5-90a4-5e7fa966f71d\") " pod="openshift-dns/node-resolver-xkcpj" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.895969 4662 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.896034 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.915281 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.928610 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.967319 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.967870 4662 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.968147 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.968348 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.988971 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.989009 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.989021 4662 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.989087 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:55.489070585 +0000 UTC m=+19.058098575 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.993929 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.993950 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.993961 4662 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:54 crc kubenswrapper[4662]: E1208 09:14:54.993994 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:55.493983464 +0000 UTC m=+19.063011454 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.997117 4662 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:54 crc kubenswrapper[4662]: I1208 09:14:54.997138 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.002015 4662 scope.go:117] "RemoveContainer" containerID="ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.009981 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.014498 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6tjk\" (UniqueName: \"kubernetes.io/projected/fe880ac0-787a-43d5-90a4-5e7fa966f71d-kube-api-access-w6tjk\") pod \"node-resolver-xkcpj\" (UID: \"fe880ac0-787a-43d5-90a4-5e7fa966f71d\") " pod="openshift-dns/node-resolver-xkcpj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.015455 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.027958 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.054932 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.055953 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xkcpj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.132150 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08
T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.187448 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.210209 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.232925 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.253658 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.257801 4662 util.go:30] "No sandbox for pod can be found. 
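Every status_manager.go:875 failure in this stretch has the same shape: the API server must call the admission webhook pod.network-node-identity.openshift.io at https://127.0.0.1:9743/pod before accepting the kubelet's status patch, but the network-node-identity-vrzqb pod that appears to serve that endpoint is itself still in ContainerCreating, so each PATCH comes back as an Internal error wrapping "connect: connection refused". A throwaway probe, assuming only the URL taken from the log (client settings are illustrative), reproduces the dial error the webhook client reports:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "strings"
        "time"
    )

    func main() {
        // The webhook is invoked with ?timeout=10s, so mirror that here.
        client := &http.Client{
            Timeout: 10 * time.Second,
            // Certificate verification is irrelevant for this probe: the dial is
            // refused before any TLS handshake while nothing listens on 9743.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        _, err := client.Post("https://127.0.0.1:9743/pod?timeout=10s",
            "application/json", strings.NewReader("{}"))
        if err != nil {
            // Prints the same error the API server relays to the kubelet:
            //   Post "https://127.0.0.1:9743/pod?timeout=10s": dial tcp 127.0.0.1:9743: connect: connection refused
            fmt.Println(err)
        }
    }

The refusals should clear once something is actually listening on 9743, which is why these patch failures are expected to stop after the webhook container starts.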
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.280875 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.281483 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-5dzps"] Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.282001 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-g7wsp"] Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.282493 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-92hkj"] Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.282711 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.282948 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.283483 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.286939 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.287234 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.287328 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.287423 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.287468 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.287553 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.287665 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.287824 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.287926 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.288027 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.288139 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.288464 4662 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"default-dockercfg-2q5b6" Dec 08 09:14:55 crc kubenswrapper[4662]: W1208 09:14:55.290484 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-c8ed0809e2a574ecacd43efc8f40725bb88bf0531611c1f7e7bab953d0c532c7 WatchSource:0}: Error finding container c8ed0809e2a574ecacd43efc8f40725bb88bf0531611c1f7e7bab953d0c532c7: Status 404 returned error can't find the container with id c8ed0809e2a574ecacd43efc8f40725bb88bf0531611c1f7e7bab953d0c532c7 Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.291480 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.301353 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.302027 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:14:56.301981242 +0000 UTC m=+19.871009242 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.312184 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.333047 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.366911 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08
T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.385501 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.404809 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-var-lib-kubelet\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.404866 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-system-cni-dir\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.404884 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/adeadc12-d6e2-4168-a1c0-de79d16c8de9-cni-binary-copy\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.404945 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-var-lib-cni-bin\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.404964 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab82f706-db87-4a73-9f90-c1fba510d034-cni-binary-copy\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.404998 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-etc-kubernetes\") pod \"multus-92hkj\" (UID: 
\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405015 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-cnibin\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405032 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-cnibin\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405048 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-socket-dir-parent\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405103 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhlb\" (UniqueName: \"kubernetes.io/projected/adeadc12-d6e2-4168-a1c0-de79d16c8de9-kube-api-access-zzhlb\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405136 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405189 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-run-k8s-cni-cncf-io\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405205 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-hostroot\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405474 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-conf-dir\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405576 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.405591 4662 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.405654 4662 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.405709 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:56.405690491 +0000 UTC m=+19.974718481 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405787 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-system-cni-dir\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405817 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-run-multus-certs\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405836 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e629796-86fa-4436-8a01-326fc70c7dc1-proxy-tls\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.405866 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:56.405858715 +0000 UTC m=+19.974886705 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405919 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-tuning-conf-dir\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405939 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab82f706-db87-4a73-9f90-c1fba510d034-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405954 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-os-release\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405975 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-run-netns\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.405991 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-daemon-config\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.406008 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0e629796-86fa-4436-8a01-326fc70c7dc1-rootfs\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.406122 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e629796-86fa-4436-8a01-326fc70c7dc1-mcd-auth-proxy-config\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.406152 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsqqt\" (UniqueName: \"kubernetes.io/projected/ab82f706-db87-4a73-9f90-c1fba510d034-kube-api-access-zsqqt\") pod 
\"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.406173 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-cni-dir\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.406191 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-os-release\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.406208 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-var-lib-cni-multus\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.406249 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgvzf\" (UniqueName: \"kubernetes.io/projected/0e629796-86fa-4436-8a01-326fc70c7dc1-kube-api-access-hgvzf\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.412053 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.444723 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.460260 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.477935 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.500234 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.506813 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-socket-dir-parent\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.506876 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzhlb\" (UniqueName: \"kubernetes.io/projected/adeadc12-d6e2-4168-a1c0-de79d16c8de9-kube-api-access-zzhlb\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.506903 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-run-k8s-cni-cncf-io\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.506936 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.506976 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-hostroot\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.506998 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-conf-dir\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507027 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-system-cni-dir\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507052 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-run-multus-certs\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507079 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e629796-86fa-4436-8a01-326fc70c7dc1-proxy-tls\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507117 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507144 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-tuning-conf-dir\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507168 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab82f706-db87-4a73-9f90-c1fba510d034-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507190 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-os-release\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507212 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-daemon-config\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507236 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0e629796-86fa-4436-8a01-326fc70c7dc1-rootfs\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507269 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e629796-86fa-4436-8a01-326fc70c7dc1-mcd-auth-proxy-config\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507294 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-run-netns\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507317 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsqqt\" (UniqueName: \"kubernetes.io/projected/ab82f706-db87-4a73-9f90-c1fba510d034-kube-api-access-zsqqt\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507348 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-cni-dir\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507372 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgvzf\" (UniqueName: \"kubernetes.io/projected/0e629796-86fa-4436-8a01-326fc70c7dc1-kube-api-access-hgvzf\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507396 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-os-release\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507424 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-var-lib-cni-multus\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507448 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-var-lib-kubelet\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507472 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-system-cni-dir\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507495 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/adeadc12-d6e2-4168-a1c0-de79d16c8de9-cni-binary-copy\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507519 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-var-lib-cni-bin\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507502 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-socket-dir-parent\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507544 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab82f706-db87-4a73-9f90-c1fba510d034-cni-binary-copy\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507628 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-etc-kubernetes\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507661 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-cnibin\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507689 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-cnibin\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507826 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-cnibin\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507862 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-etc-kubernetes\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.507897 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-cnibin\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" 
Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.507235 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.508134 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.508151 4662 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.508211 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:56.508187249 +0000 UTC m=+20.077215239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.508257 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-hostroot\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.508294 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-conf-dir\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.508328 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-system-cni-dir\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.508358 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-run-multus-certs\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.508389 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ab82f706-db87-4a73-9f90-c1fba510d034-cni-binary-copy\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc 
kubenswrapper[4662]: I1208 09:14:55.508880 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-run-k8s-cni-cncf-io\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.509052 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-os-release\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.509150 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-run-netns\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.509239 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-var-lib-cni-multus\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.509295 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-var-lib-kubelet\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.509349 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-system-cni-dir\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.509568 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-cni-dir\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.509583 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-host-var-lib-cni-bin\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.509191 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e629796-86fa-4436-8a01-326fc70c7dc1-mcd-auth-proxy-config\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.510040 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/adeadc12-d6e2-4168-a1c0-de79d16c8de9-cni-binary-copy\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.510191 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.510264 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.510316 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/ab82f706-db87-4a73-9f90-c1fba510d034-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.510322 4662 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.510414 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:56.510396313 +0000 UTC m=+20.079424303 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.510514 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/adeadc12-d6e2-4168-a1c0-de79d16c8de9-os-release\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.510601 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0e629796-86fa-4436-8a01-326fc70c7dc1-rootfs\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.510626 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/adeadc12-d6e2-4168-a1c0-de79d16c8de9-multus-daemon-config\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.510675 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ab82f706-db87-4a73-9f90-c1fba510d034-tuning-conf-dir\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.514179 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.517525 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e629796-86fa-4436-8a01-326fc70c7dc1-proxy-tls\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.532893 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsqqt\" (UniqueName: \"kubernetes.io/projected/ab82f706-db87-4a73-9f90-c1fba510d034-kube-api-access-zsqqt\") pod \"multus-additional-cni-plugins-g7wsp\" (UID: \"ab82f706-db87-4a73-9f90-c1fba510d034\") " pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.533268 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzhlb\" (UniqueName: \"kubernetes.io/projected/adeadc12-d6e2-4168-a1c0-de79d16c8de9-kube-api-access-zzhlb\") pod \"multus-92hkj\" (UID: \"adeadc12-d6e2-4168-a1c0-de79d16c8de9\") " pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.537639 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 
08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.546256 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hgvzf\" (UniqueName: \"kubernetes.io/projected/0e629796-86fa-4436-8a01-326fc70c7dc1-kube-api-access-hgvzf\") pod \"machine-config-daemon-5dzps\" (UID: \"0e629796-86fa-4436-8a01-326fc70c7dc1\") " pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.549653 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.573211 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.597308 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.610617 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-92hkj" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.623972 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.644341 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" Dec 08 09:14:55 crc kubenswrapper[4662]: W1208 09:14:55.655892 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e629796_86fa_4436_8a01_326fc70c7dc1.slice/crio-8ce3ceda90d907c1820cd53d0f014c66b17ad6f63d26e41130360fdaa98a5f20 WatchSource:0}: Error finding container 8ce3ceda90d907c1820cd53d0f014c66b17ad6f63d26e41130360fdaa98a5f20: Status 404 returned error can't find the container with id 8ce3ceda90d907c1820cd53d0f014c66b17ad6f63d26e41130360fdaa98a5f20 Dec 08 09:14:55 crc kubenswrapper[4662]: W1208 09:14:55.668879 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab82f706_db87_4a73_9f90_c1fba510d034.slice/crio-5be2e9388093b87098b13e35746bfd81f03bf02f0c2c8e2cf50c589f967d8085 WatchSource:0}: Error finding container 5be2e9388093b87098b13e35746bfd81f03bf02f0c2c8e2cf50c589f967d8085: Status 404 returned error can't find the container with id 5be2e9388093b87098b13e35746bfd81f03bf02f0c2c8e2cf50c589f967d8085 Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.676975 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fhz87"] Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.678508 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.684217 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.688056 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.688299 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.688404 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.688654 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.688802 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.688942 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.696613 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:14:55 crc kubenswrapper[4662]: E1208 09:14:55.696824 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.726899 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.783294 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814411 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-bin\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814523 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-env-overrides\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814542 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-netns\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814559 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovn-node-metrics-cert\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814597 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-kubelet\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814611 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-systemd-units\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814626 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-var-lib-openvswitch\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814641 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-netd\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814683 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-log-socket\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814704 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-ovn-kubernetes\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814721 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcn6q\" (UniqueName: \"kubernetes.io/projected/8d221fdb-50ee-4a2a-9db5-30e79f604466-kube-api-access-zcn6q\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814756 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-etc-openvswitch\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814774 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-ovn\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814790 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-node-log\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814809 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-openvswitch\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814932 4662 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-config\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.814956 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.815074 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-systemd\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.815096 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-script-lib\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.815130 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-slash\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.832980 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.854672 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08
T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.877696 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" event={"ID":"ab82f706-db87-4a73-9f90-c1fba510d034","Type":"ContainerStarted","Data":"5be2e9388093b87098b13e35746bfd81f03bf02f0c2c8e2cf50c589f967d8085"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.880000 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.883857 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.887099 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.887422 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.891965 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.892005 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"8ce3ceda90d907c1820cd53d0f014c66b17ad6f63d26e41130360fdaa98a5f20"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.893132 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.896062 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-92hkj" event={"ID":"adeadc12-d6e2-4168-a1c0-de79d16c8de9","Type":"ContainerStarted","Data":"991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.896221 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-92hkj" event={"ID":"adeadc12-d6e2-4168-a1c0-de79d16c8de9","Type":"ContainerStarted","Data":"797fc56a8c7bb406ce532de4a4424767ac83c0a0404f73aeb6676341b4c38161"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.897179 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"8ff57532eae93856327db0438dcf68482ff2278beb124cff3a2763fbb6d96908"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.898191 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.898295 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c8ed0809e2a574ecacd43efc8f40725bb88bf0531611c1f7e7bab953d0c532c7"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.899900 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xkcpj" event={"ID":"fe880ac0-787a-43d5-90a4-5e7fa966f71d","Type":"ContainerStarted","Data":"808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922"} Dec 08 09:14:55 crc 
kubenswrapper[4662]: I1208 09:14:55.899990 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xkcpj" event={"ID":"fe880ac0-787a-43d5-90a4-5e7fa966f71d","Type":"ContainerStarted","Data":"c45bb37da03c4dbcb178abeb16e831f9a6967b6a66d65333bda88c18b819482e"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.904806 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.904929 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.905135 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.905152 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"179fe49092ce8213338a819accac03bad960423d6a40272030eb13f2ca7f07ee"} Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916066 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-etc-openvswitch\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916117 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-ovn\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916147 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-node-log\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916169 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-ovn-kubernetes\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916194 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcn6q\" (UniqueName: \"kubernetes.io/projected/8d221fdb-50ee-4a2a-9db5-30e79f604466-kube-api-access-zcn6q\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916219 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-openvswitch\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc 
kubenswrapper[4662]: I1208 09:14:55.916243 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-config\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916267 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916320 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-slash\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916342 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-systemd\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916372 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-script-lib\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916396 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-bin\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916447 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-env-overrides\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916480 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-netns\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916498 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovn-node-metrics-cert\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916516 4662 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-kubelet\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916536 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-systemd-units\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916555 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-var-lib-openvswitch\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916583 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-netd\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916601 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-log-socket\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916665 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-log-socket\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916718 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-etc-openvswitch\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916773 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-ovn\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916799 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-node-log\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.916820 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-ovn-kubernetes\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.917064 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-openvswitch\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.918457 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-config\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.918505 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.918534 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-slash\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.918556 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-systemd\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.919671 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-netns\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.919799 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-systemd-units\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.919840 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-kubelet\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.919881 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-bin\") pod \"ovnkube-node-fhz87\" (UID: 
\"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.919917 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-var-lib-openvswitch\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.919950 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-netd\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.920546 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-script-lib\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.920663 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-env-overrides\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.925349 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovn-node-metrics-cert\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.925613 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.936677 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcn6q\" (UniqueName: \"kubernetes.io/projected/8d221fdb-50ee-4a2a-9db5-30e79f604466-kube-api-access-zcn6q\") pod \"ovnkube-node-fhz87\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.945104 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.968386 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:55 crc kubenswrapper[4662]: I1208 09:14:55.994790 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:55Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.003874 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:14:56 crc kubenswrapper[4662]: W1208 09:14:56.021233 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d221fdb_50ee_4a2a_9db5_30e79f604466.slice/crio-2b12324e25d3dc32f97d7c4e653f6b9fadb7a16ccad359400cb4b43c51eef36d WatchSource:0}: Error finding container 2b12324e25d3dc32f97d7c4e653f6b9fadb7a16ccad359400cb4b43c51eef36d: Status 404 returned error can't find the container with id 2b12324e25d3dc32f97d7c4e653f6b9fadb7a16ccad359400cb4b43c51eef36d Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.030172 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.078020 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.116645 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.145887 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.163910 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.180637 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.194662 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.216051 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.232208 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\"
:[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.264105 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.279324 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.306434 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.320178 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.320472 4662 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.320328 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:14:58.320300519 +0000 UTC m=+21.889328509 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.332014 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.348508 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.421691 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.421763 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.421849 4662 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.421905 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:58.421889236 +0000 UTC m=+21.990917226 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.422151 4662 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.422265 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:58.422252865 +0000 UTC m=+21.991280855 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.522485 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.522796 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.522669 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.522935 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.522910 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.523075 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.523096 4662 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.522997 4662 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.523155 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:58.523138495 +0000 UTC m=+22.092166485 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.523173 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-08 09:14:58.523165905 +0000 UTC m=+22.092193895 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.565403 4662 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 08 09:14:56 crc kubenswrapper[4662]: W1208 09:14:56.566798 4662 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": Unexpected watch close - watch lasted less than a second and no items received Dec 08 09:14:56 crc kubenswrapper[4662]: W1208 09:14:56.566882 4662 reflector.go:484] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 08 09:14:56 crc kubenswrapper[4662]: W1208 09:14:56.566915 4662 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": Unexpected watch close - watch lasted less than a second and no items received Dec 08 09:14:56 crc kubenswrapper[4662]: W1208 09:14:56.567038 4662 reflector.go:484] object-"openshift-ovn-kubernetes"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Dec 08 09:14:56 crc kubenswrapper[4662]: W1208 09:14:56.567198 4662 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovnkube-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovnkube-config": Unexpected watch close - watch lasted less than a second and no items received Dec 08 09:14:56 crc kubenswrapper[4662]: W1208 09:14:56.567461 4662 reflector.go:484] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": Unexpected watch close - watch lasted less than a second and no items received Dec 08 09:14:56 crc kubenswrapper[4662]: W1208 09:14:56.567501 4662 reflector.go:484] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended 
with: very short watch: object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.698566 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.698722 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.698853 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:56 crc kubenswrapper[4662]: E1208 09:14:56.698928 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.701835 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.702800 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.703659 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.704547 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.706070 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.706641 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.707783 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.708337 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.709629 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.710136 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.711093 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.711885 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.712925 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.713415 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.715513 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.716138 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.717370 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.717462 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.718006 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.718729 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.720186 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.722189 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.722912 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.723412 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.724669 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.725300 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" 
path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.726411 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.727205 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.728251 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.729142 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.730359 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.731075 4662 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.731215 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.734014 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.734914 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.735462 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.737655 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.738917 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.739652 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.741162 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.741557 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.742520 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.743658 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.744457 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.745953 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.746815 4662 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.747884 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.748537 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.751157 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.752182 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.753555 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.754160 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.755357 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.756063 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.757064 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.758161 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.763122 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.802599 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.830329 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.850713 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.873519 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.895256 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.912683 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.915511 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c" exitCode=0 Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.915607 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c"} Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.915685 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"2b12324e25d3dc32f97d7c4e653f6b9fadb7a16ccad359400cb4b43c51eef36d"} Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.917314 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0"} Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.925716 4662 generic.go:334] "Generic (PLEG): container finished" podID="ab82f706-db87-4a73-9f90-c1fba510d034" containerID="979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8" exitCode=0 Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.925998 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" event={"ID":"ab82f706-db87-4a73-9f90-c1fba510d034","Type":"ContainerDied","Data":"979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8"} Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.928895 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:56 crc kubenswrapper[4662]: I1208 09:14:56.966079 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.001900 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.i
o/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.030520 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.057197 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@s
ha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.084621 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.102937 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.123511 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.141467 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.159164 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.174338 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.189107 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.201795 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.224082 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.237631 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.249882 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.266389 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.444976 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.494717 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.682670 4662 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.685042 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.685087 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.685097 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.685170 4662 kubelet_node_status.go:76] "Attempting to register node" node="crc" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.696944 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:14:57 crc kubenswrapper[4662]: E1208 09:14:57.697058 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.697535 4662 kubelet_node_status.go:115] "Node was previously registered" node="crc" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.697673 4662 kubelet_node_status.go:79] "Successfully registered node" node="crc" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.698929 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.698956 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.698966 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.698982 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.698993 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:57Z","lastTransitionTime":"2025-12-08T09:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:57 crc kubenswrapper[4662]: E1208 09:14:57.718944 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.723411 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.723442 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.723453 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.723469 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.723481 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:57Z","lastTransitionTime":"2025-12-08T09:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:57 crc kubenswrapper[4662]: E1208 09:14:57.735658 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.739413 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.739446 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.739456 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.739471 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.739481 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:57Z","lastTransitionTime":"2025-12-08T09:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:57 crc kubenswrapper[4662]: E1208 09:14:57.752780 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.758564 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.758598 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.758609 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.758625 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.758635 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:57Z","lastTransitionTime":"2025-12-08T09:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.762190 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Dec 08 09:14:57 crc kubenswrapper[4662]: E1208 09:14:57.775203 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.779256 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.779295 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
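Every retry above fails for the same reason: the admission webhook on 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-08, so TLS verification fails no matter what the kubelet sends. A minimal standalone sketch (not part of kubelet; the address and the expectation of an expired NotAfter are taken from the log above) to read the certificate's validity window directly:

```go
// certcheck.go: dial the webhook endpoint named in the log and print the
// serving certificate's validity window. Verification is disabled because
// failing verification is exactly the symptom under investigation.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	// Address taken from the failed webhook call in the log above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:    %s\n", cert.Subject)
	fmt.Printf("not before: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("not after:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired, matching the x509 error in the log")
	}
}
```

On OpenShift Local this is the familiar symptom of starting a bundle whose internal certificates have lapsed; the cluster normally recovers on its own once certificate rotation completes, so the check above is a diagnostic, not a fix.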
event="NodeHasNoDiskPressure" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.779306 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.779320 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.779333 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:57Z","lastTransitionTime":"2025-12-08T09:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:57 crc kubenswrapper[4662]: E1208 09:14:57.794332 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: E1208 09:14:57.794483 4662 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.796630 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
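The burst of patch attempts ends with "update node status exceeds retry count": the kubelet caps each node-status sync at a small fixed number of tries (a constant named nodeStatusUpdateRetry in the kubelet source, 5 in recent releases) and then waits for the next sync period instead of retrying forever. A rough sketch of that bounded-retry shape, illustrative only and not kubelet's actual code:

```go
// Bounded retry, roughly the shape the kubelet uses for node status
// updates: try a handful of times, then surface a single "exceeds retry
// count" error and leave it to the next sync interval.
package main

import (
	"errors"
	"fmt"
)

const nodeStatusUpdateRetry = 5 // attempts per sync; value assumed from recent kubelet sources

func updateNodeStatus(patch func() error) error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patch(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	// Every attempt fails identically here, mirroring the log: the webhook
	// rejects the patch because its serving certificate has expired.
	err := updateNodeStatus(func() error {
		return errors.New(`failed calling webhook "node.network-node-identity.openshift.io": certificate has expired`)
	})
	fmt.Println(err)
}
```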
event="NodeHasSufficientMemory" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.796662 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.796671 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.796690 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.796702 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:57Z","lastTransitionTime":"2025-12-08T09:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.899517 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.899817 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.899828 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.899854 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.899865 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:57Z","lastTransitionTime":"2025-12-08T09:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.920048 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.932278 4662 generic.go:334] "Generic (PLEG): container finished" podID="ab82f706-db87-4a73-9f90-c1fba510d034" containerID="005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023" exitCode=0
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.932342 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" event={"ID":"ab82f706-db87-4a73-9f90-c1fba510d034","Type":"ContainerDied","Data":"005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023"}
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.935222 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4"}
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.938762 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa"}
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.938929 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9"}
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.939026 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc"}
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.939117 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088"}
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.939209 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed"}
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.939324 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03"}
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.941058 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.958195 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.968915 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.972141 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:57 crc kubenswrapper[4662]: I1208 09:14:57.987785 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:57Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.004361 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.004465 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.004483 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.004519 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.004535 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:58Z","lastTransitionTime":"2025-12-08T09:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.005842 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.021815 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.023248 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-wzjpk"] Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.023785 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wzjpk" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.027419 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.027471 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.028694 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.028957 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.035490 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.050104 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.065626 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.087335 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.087454 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.101075 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.106879 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.106902 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.106910 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.106923 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:58 crc 
kubenswrapper[4662]: I1208 09:14:58.106931 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:58Z","lastTransitionTime":"2025-12-08T09:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.112602 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.126511 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.138399 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f65a592a-5cfd-40ce-9ec9-aa26409e7b74-serviceca\") pod \"node-ca-wzjpk\" (UID: \"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\") " pod="openshift-image-registry/node-ca-wzjpk" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.138458 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wr5s\" (UniqueName: \"kubernetes.io/projected/f65a592a-5cfd-40ce-9ec9-aa26409e7b74-kube-api-access-5wr5s\") pod \"node-ca-wzjpk\" (UID: \"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\") " pod="openshift-image-registry/node-ca-wzjpk" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.138506 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f65a592a-5cfd-40ce-9ec9-aa26409e7b74-host\") pod \"node-ca-wzjpk\" (UID: \"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\") " pod="openshift-image-registry/node-ca-wzjpk" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.138622 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.154016 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.173429 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.187816 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.199242 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.208967 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.208995 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.209005 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.209020 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.209035 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:58Z","lastTransitionTime":"2025-12-08T09:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.210856 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.220402 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.232076 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.239654 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f65a592a-5cfd-40ce-9ec9-aa26409e7b74-host\") pod \"node-ca-wzjpk\" (UID: \"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\") " pod="openshift-image-registry/node-ca-wzjpk" Dec 08 
09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.239702 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f65a592a-5cfd-40ce-9ec9-aa26409e7b74-serviceca\") pod \"node-ca-wzjpk\" (UID: \"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\") " pod="openshift-image-registry/node-ca-wzjpk" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.239735 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wr5s\" (UniqueName: \"kubernetes.io/projected/f65a592a-5cfd-40ce-9ec9-aa26409e7b74-kube-api-access-5wr5s\") pod \"node-ca-wzjpk\" (UID: \"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\") " pod="openshift-image-registry/node-ca-wzjpk" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.239959 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/f65a592a-5cfd-40ce-9ec9-aa26409e7b74-host\") pod \"node-ca-wzjpk\" (UID: \"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\") " pod="openshift-image-registry/node-ca-wzjpk" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.241115 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/f65a592a-5cfd-40ce-9ec9-aa26409e7b74-serviceca\") pod \"node-ca-wzjpk\" (UID: \"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\") " pod="openshift-image-registry/node-ca-wzjpk" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.245524 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126
.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.264644 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wr5s\" (UniqueName: \"kubernetes.io/projected/f65a592a-5cfd-40ce-9ec9-aa26409e7b74-kube-api-access-5wr5s\") pod \"node-ca-wzjpk\" (UID: \"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\") " pod="openshift-image-registry/node-ca-wzjpk" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.267421 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.279879 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.293795 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.308246 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.310668 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.310823 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.310959 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.311081 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.311212 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:58Z","lastTransitionTime":"2025-12-08T09:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.323207 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.338806 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-wzjpk" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.340144 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.340302 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:15:02.340273969 +0000 UTC m=+25.909301959 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.342027 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\
"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.343464 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 08 09:14:58 crc kubenswrapper[4662]: W1208 09:14:58.357007 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf65a592a_5cfd_40ce_9ec9_aa26409e7b74.slice/crio-6938f00b6a751bedbadb4693478ece17c16c88dd7612f4c3c1c82933971efad3 WatchSource:0}: Error finding container 6938f00b6a751bedbadb4693478ece17c16c88dd7612f4c3c1c82933971efad3: Status 404 returned error can't find the container with id 6938f00b6a751bedbadb4693478ece17c16c88dd7612f4c3c1c82933971efad3 Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.367944 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.372968 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.384386 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.404713 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.416618 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.416666 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.416677 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.416695 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.416712 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:58Z","lastTransitionTime":"2025-12-08T09:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.422725 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.437610 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d
6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.441087 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.441173 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.441265 4662 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.441316 4662 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.441353 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:02.441337063 +0000 UTC m=+26.010365043 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.441408 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:02.441363184 +0000 UTC m=+26.010391234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.457168 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z 
is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.472118 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.484970 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.495450 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.508051 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.518951 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.518994 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.519007 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.519024 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.519035 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:58Z","lastTransitionTime":"2025-12-08T09:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.541683 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.541763 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.541903 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.541928 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.541940 4662 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.541989 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:02.541973247 +0000 UTC m=+26.111001237 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.542282 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.542304 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.542313 4662 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.542344 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:02.542334726 +0000 UTC m=+26.111362716 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.549661 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.588934 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.621508 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.621535 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.621544 4662 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.621556 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.621565 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:58Z","lastTransitionTime":"2025-12-08T09:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.628429 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.670384 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d
7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cn
i-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.697045 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.697239 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.697622 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:14:58 crc kubenswrapper[4662]: E1208 09:14:58.697806 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
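
[annotation] Two distinct failures repeat through this window and are easy to conflate. The "Error syncing pod, skipping" entries are the direct effect of the missing CNI config (/etc/kubernetes/cni/net.d/ stays empty until the network pods finish coming up), while every "Failed to update status for pod" entry is rejected by the pod.network-node-identity.openshift.io webhook because its serving certificate expired on 2025-08-24T17:21:41Z and the node clock reads 2025-12-08, which is consistent with a CRC snapshot resumed months after its certificates were minted. A minimal sketch to confirm the certificate window from the node itself; the host and port come from the failing Post URL above, and the third-party cryptography package is assumed for DER parsing:

#!/usr/bin/env python3
# Read the serving certificate behind the failing webhook endpoint and
# compare its validity window with the clock. Minimal sketch; run on the node.
import socket
import ssl
from datetime import datetime

from cryptography import x509  # assumed third-party dependency

HOST, PORT = "127.0.0.1", 9743  # taken from the failed Post URL in the log

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # fetch the cert even though it is expired

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.utcnow()  # cryptography returns naive UTC datetimes here
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)
print("expired:  ", now > cert.not_valid_after)

Per the entries above this should print a notAfter of 2025-08-24 17:21:41; the status patches should start going through once the cluster rotates that certificate.
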
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.708156 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.723401 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.723453 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.723470 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.723488 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 
09:14:58.723501 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:58Z","lastTransitionTime":"2025-12-08T09:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.755702 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-
08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.787440 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.826204 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.826243 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.826252 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.826266 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.826276 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:58Z","lastTransitionTime":"2025-12-08T09:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.832813 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.870225 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.908427 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.928596 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.928653 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.928679 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.928707 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.928726 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:58Z","lastTransitionTime":"2025-12-08T09:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.944883 4662 generic.go:334] "Generic (PLEG): container finished" podID="ab82f706-db87-4a73-9f90-c1fba510d034" containerID="320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869" exitCode=0 Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.944962 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" event={"ID":"ab82f706-db87-4a73-9f90-c1fba510d034","Type":"ContainerDied","Data":"320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.947057 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wzjpk" event={"ID":"f65a592a-5cfd-40ce-9ec9-aa26409e7b74","Type":"ContainerStarted","Data":"db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.947126 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-wzjpk" event={"ID":"f65a592a-5cfd-40ce-9ec9-aa26409e7b74","Type":"ContainerStarted","Data":"6938f00b6a751bedbadb4693478ece17c16c88dd7612f4c3c1c82933971efad3"} Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.958463 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-par
ent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:58 crc kubenswrapper[4662]: I1208 09:14:58.991011 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:58Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.031243 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.032673 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.032712 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.032721 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.032768 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.032780 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:59Z","lastTransitionTime":"2025-12-08T09:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.077869 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.111904 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.135161 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.135207 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.135217 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.135233 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.135245 4662 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:59Z","lastTransitionTime":"2025-12-08T09:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.159035 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z 
is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.188388 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.235879 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.238204 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.238241 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.238255 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.238279 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.238292 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:59Z","lastTransitionTime":"2025-12-08T09:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.270713 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.311718 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.342490 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.342558 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.342574 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.342598 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.342627 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:59Z","lastTransitionTime":"2025-12-08T09:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.354352 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.390874 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.428666 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.444678 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.444719 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.444730 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.444768 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.444779 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:59Z","lastTransitionTime":"2025-12-08T09:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.470489 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mount
Path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.508754 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.547623 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.547665 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.547675 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.547692 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.547707 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:59Z","lastTransitionTime":"2025-12-08T09:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.553837 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.589854 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.630154 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.650096 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.650147 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.650157 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.650175 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.650186 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:59Z","lastTransitionTime":"2025-12-08T09:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.686285 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.697390 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:14:59 crc kubenswrapper[4662]: E1208 09:14:59.697516 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.711560 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.752874 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.752934 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.752949 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.752968 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.752981 4662 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:59Z","lastTransitionTime":"2025-12-08T09:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.757119 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z 
is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.791294 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.832068 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.855855 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.855891 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.855899 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.855914 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.855924 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:59Z","lastTransitionTime":"2025-12-08T09:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.869246 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.936910 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.954003 4662 generic.go:334] "Generic (PLEG): 
container finished" podID="ab82f706-db87-4a73-9f90-c1fba510d034" containerID="39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e" exitCode=0 Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.954042 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" event={"ID":"ab82f706-db87-4a73-9f90-c1fba510d034","Type":"ContainerDied","Data":"39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.958144 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.958205 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.958216 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.958252 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.958263 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:14:59Z","lastTransitionTime":"2025-12-08T09:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:14:59 crc kubenswrapper[4662]: I1208 09:14:59.979395 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.002190 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:14:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.044234 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.061569 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.061631 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.061649 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.061676 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.061690 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:00Z","lastTransitionTime":"2025-12-08T09:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.069768 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.110489 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.150417 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.164733 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.164795 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.164805 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.164822 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.164833 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:00Z","lastTransitionTime":"2025-12-08T09:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.188921 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.232154 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.268189 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.268248 
4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.268263 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.268285 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.268303 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:00Z","lastTransitionTime":"2025-12-08T09:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.275052 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"}
,{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.312785 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.352086 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\"
:\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.371240 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.371308 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.371327 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.371354 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.371372 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:00Z","lastTransitionTime":"2025-12-08T09:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin 
returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.388132 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.433733 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.472813 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.478885 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.478939 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.478960 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.478986 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.479004 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:00Z","lastTransitionTime":"2025-12-08T09:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.510793 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.581999 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.582046 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.582057 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.582075 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.582086 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:00Z","lastTransitionTime":"2025-12-08T09:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.684650 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.684788 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.684819 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.684922 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.684946 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:00Z","lastTransitionTime":"2025-12-08T09:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.696974 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.697028 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:00 crc kubenswrapper[4662]: E1208 09:15:00.697156 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:00 crc kubenswrapper[4662]: E1208 09:15:00.697282 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.788252 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.788290 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.788303 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.788321 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.788333 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:00Z","lastTransitionTime":"2025-12-08T09:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.891002 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.891043 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.891054 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.891070 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:00 crc kubenswrapper[4662]: I1208 09:15:00.891082 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:00Z","lastTransitionTime":"2025-12-08T09:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.611207 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.611726 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.611782 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.611828 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.611866 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612029 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612048 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612062 4662 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612115 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:10.612098558 +0000 UTC m=+34.181126568 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612267 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612293 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612306 4662 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612373 4662 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612384 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:10.612363694 +0000 UTC m=+34.181391794 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612412 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:10.612399475 +0000 UTC m=+34.181427565 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612508 4662 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612544 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-12-08 09:15:10.612535288 +0000 UTC m=+34.181563398 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.612544 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612656 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:15:10.612640221 +0000 UTC m=+34.181668231 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.612647 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.613890 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.613939 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.613957 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.613976 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.613990 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:02Z","lastTransitionTime":"2025-12-08T09:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.628822 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" event={"ID":"ab82f706-db87-4a73-9f90-c1fba510d034","Type":"ContainerStarted","Data":"3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8"} Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.637685 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.637907 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.637972 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc"} Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.638098 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:02 crc kubenswrapper[4662]: E1208 09:15:02.638264 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.674030 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.716304 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.716356 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.716369 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.716391 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.716406 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:02Z","lastTransitionTime":"2025-12-08T09:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.717353 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.747921 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.765758 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9
8100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.783292 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.796052 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.807608 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.818798 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.818833 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.818842 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.818858 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.818874 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:02Z","lastTransitionTime":"2025-12-08T09:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.822503 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.851826 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269
019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.866069 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.901175 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.921664 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.921712 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.921726 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.921761 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.921774 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:02Z","lastTransitionTime":"2025-12-08T09:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.928865 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.942210 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.954549 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:02 crc kubenswrapper[4662]: I1208 09:15:02.968090 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:02Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.024368 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.024414 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.024426 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.024458 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.024473 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:03Z","lastTransitionTime":"2025-12-08T09:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.126886 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.126927 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.126936 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.126951 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.126962 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:03Z","lastTransitionTime":"2025-12-08T09:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.229441 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.229705 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.229719 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.229735 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.229764 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:03Z","lastTransitionTime":"2025-12-08T09:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.332396 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.332496 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.332513 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.332534 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.332548 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:03Z","lastTransitionTime":"2025-12-08T09:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.435201 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.435241 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.435250 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.435265 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.435274 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:03Z","lastTransitionTime":"2025-12-08T09:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.537990 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.538033 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.538045 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.538066 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.538081 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:03Z","lastTransitionTime":"2025-12-08T09:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.640487 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.641462 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.641539 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.641605 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.641665 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:03Z","lastTransitionTime":"2025-12-08T09:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.649319 4662 generic.go:334] "Generic (PLEG): container finished" podID="ab82f706-db87-4a73-9f90-c1fba510d034" containerID="3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8" exitCode=0 Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.649402 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" event={"ID":"ab82f706-db87-4a73-9f90-c1fba510d034","Type":"ContainerDied","Data":"3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.662697 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.664035 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.664127 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.669010 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.693966 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.696392 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:03 crc kubenswrapper[4662]: E1208 09:15:03.696518 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.696566 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:03 crc kubenswrapper[4662]: E1208 09:15:03.696603 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.710871 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:
54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.728503 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.729418 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.734122 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.743927 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.743963 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.743974 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.743991 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.744004 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:03Z","lastTransitionTime":"2025-12-08T09:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.746000 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.771549 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.786378 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.800938 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.825041 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0
d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.838423 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.847577 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.847640 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.847656 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.847679 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.847699 4662 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:03Z","lastTransitionTime":"2025-12-08T09:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.859925 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z 
is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.877095 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.891371 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.906104 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.918769 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.948584 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\
\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount
\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.950037 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.950066 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.950075 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:03 crc 
kubenswrapper[4662]: I1208 09:15:03.950136 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.950147 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:03Z","lastTransitionTime":"2025-12-08T09:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.977950 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containe
rID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b1
0d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:03 crc kubenswrapper[4662]: I1208 09:15:03.997097 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:03Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.011867 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.026333 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.041052 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.053397 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.053449 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.053463 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.053487 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.053499 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:04Z","lastTransitionTime":"2025-12-08T09:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.055795 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.070428 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.090436 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.103725 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.121245 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.137967 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.154987 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.157344 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.157398 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.157410 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.157434 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.157447 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:04Z","lastTransitionTime":"2025-12-08T09:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.169766 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.185302 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.260331 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.260365 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.260374 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.260387 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.260396 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:04Z","lastTransitionTime":"2025-12-08T09:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.362431 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.362472 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.362486 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.362500 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.362511 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:04Z","lastTransitionTime":"2025-12-08T09:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.464395 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.464433 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.464475 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.464493 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.464504 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:04Z","lastTransitionTime":"2025-12-08T09:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.567225 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.567261 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.567269 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.567283 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.567293 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:04Z","lastTransitionTime":"2025-12-08T09:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.668964 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.669003 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.669011 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.669025 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.669035 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:04Z","lastTransitionTime":"2025-12-08T09:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.669435 4662 generic.go:334] "Generic (PLEG): container finished" podID="ab82f706-db87-4a73-9f90-c1fba510d034" containerID="5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646" exitCode=0 Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.669563 4662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.669536 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" event={"ID":"ab82f706-db87-4a73-9f90-c1fba510d034","Type":"ContainerDied","Data":"5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646"} Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.697291 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:04 crc kubenswrapper[4662]: E1208 09:15:04.697432 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.698499 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.716062 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.735291 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.747590 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.762471 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.773998 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.774039 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.774050 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.774065 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.774080 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:04Z","lastTransitionTime":"2025-12-08T09:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.777310 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.788161 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.800019 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.810204 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.823388 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.836909 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.846945 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.861683 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.873315 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.879354 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.879410 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.879422 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.879438 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.879449 4662 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:04Z","lastTransitionTime":"2025-12-08T09:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.885676 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:04Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.982866 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.982906 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.982919 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.982935 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:04 crc kubenswrapper[4662]: I1208 09:15:04.982946 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:04Z","lastTransitionTime":"2025-12-08T09:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.086237 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.086309 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.086332 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.086362 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.086386 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:05Z","lastTransitionTime":"2025-12-08T09:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.189078 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.189115 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.189126 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.189141 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.189153 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:05Z","lastTransitionTime":"2025-12-08T09:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.292396 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.292709 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.292721 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.292752 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.292766 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:05Z","lastTransitionTime":"2025-12-08T09:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.395201 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.395240 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.395253 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.395268 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.395279 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:05Z","lastTransitionTime":"2025-12-08T09:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.497794 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.497840 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.497849 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.497864 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.497873 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:05Z","lastTransitionTime":"2025-12-08T09:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.600617 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.600659 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.600681 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.600701 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.600713 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:05Z","lastTransitionTime":"2025-12-08T09:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.678799 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" event={"ID":"ab82f706-db87-4a73-9f90-c1fba510d034","Type":"ContainerStarted","Data":"7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161"} Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.679319 4662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.697141 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:05 crc kubenswrapper[4662]: E1208 09:15:05.697691 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.697565 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.697374 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:05 crc kubenswrapper[4662]: E1208 09:15:05.698356 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.703464 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.703515 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.703526 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.703540 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.703550 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:05Z","lastTransitionTime":"2025-12-08T09:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.712403 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.726145 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.747733 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0
d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.764964 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.783563 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.795328 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.806773 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.806806 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.806815 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.806829 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.806840 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:05Z","lastTransitionTime":"2025-12-08T09:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.808068 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.822254 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.832180 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.844858 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.865059 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.878033 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.894499 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.904500 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.908816 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.908846 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.908854 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.908866 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:05 crc kubenswrapper[4662]: I1208 09:15:05.908875 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:05Z","lastTransitionTime":"2025-12-08T09:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.011813 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.011867 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.011882 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.011899 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.011909 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:06Z","lastTransitionTime":"2025-12-08T09:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.114407 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.114433 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.114441 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.114454 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.114462 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:06Z","lastTransitionTime":"2025-12-08T09:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.217126 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.217446 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.217596 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.217779 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.217936 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:06Z","lastTransitionTime":"2025-12-08T09:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.321221 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.321268 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.321281 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.321298 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.321311 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:06Z","lastTransitionTime":"2025-12-08T09:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.423317 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.423358 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.423368 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.423384 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.423396 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:06Z","lastTransitionTime":"2025-12-08T09:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.525466 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.525497 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.525506 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.525519 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.525528 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:06Z","lastTransitionTime":"2025-12-08T09:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.627288 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.627323 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.627334 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.627350 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.627362 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:06Z","lastTransitionTime":"2025-12-08T09:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.685920 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/0.log" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.691379 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7" exitCode=1 Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.691933 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.692411 4662 scope.go:117] "RemoveContainer" containerID="64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.696717 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:06 crc kubenswrapper[4662]: E1208 09:15:06.696889 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.707917 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"
Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.728371 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.729993 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.730064 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.730083 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.730112 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.730131 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:06Z","lastTransitionTime":"2025-12-08T09:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.742133 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.755381 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.771040 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.783921 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.796694 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.804796 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.817575 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.828398 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.833129 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.833159 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.833169 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.833185 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.833195 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:06Z","lastTransitionTime":"2025-12-08T09:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.840302 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.854417 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.877419 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.893623 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.914317 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:06Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI1208 09:15:05.986064 5833 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.581048ms\\\\nF1208 09:15:05.986063 5833 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z]\\\\nI1208 09:15:05.986052 5833 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e 
Where:[where\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.931923 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveRe
adOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.936018 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.936067 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.936081 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.936104 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.936119 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:06Z","lastTransitionTime":"2025-12-08T09:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.951549 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.967324 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.983506 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:06 crc kubenswrapper[4662]: I1208 09:15:06.995820 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:06Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.010597 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.024978 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.038686 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.038971 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.039039 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.039053 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.039095 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.039112 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:07Z","lastTransitionTime":"2025-12-08T09:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.058341 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf90
82a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.078895 4662 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.097811 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.124191 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.141721 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.141768 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.141778 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.141791 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.141802 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:07Z","lastTransitionTime":"2025-12-08T09:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.143992 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.158624 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.174917 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:06Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI1208 09:15:05.986064 5833 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.581048ms\\\\nF1208 09:15:05.986063 5833 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z]\\\\nI1208 09:15:05.986052 5833 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e 
Where:[where\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.244787 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.244836 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.244847 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.244863 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.244874 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:07Z","lastTransitionTime":"2025-12-08T09:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.347244 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.347284 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.347296 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.347314 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.347326 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:07Z","lastTransitionTime":"2025-12-08T09:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.449537 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.449577 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.449588 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.449603 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.449615 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:07Z","lastTransitionTime":"2025-12-08T09:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.552604 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.552655 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.552666 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.552686 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.552696 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:07Z","lastTransitionTime":"2025-12-08T09:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.654719 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.654784 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.654793 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.654807 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.654816 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:07Z","lastTransitionTime":"2025-12-08T09:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.696056 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/0.log" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.696504 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.696538 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:07 crc kubenswrapper[4662]: E1208 09:15:07.696611 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:07 crc kubenswrapper[4662]: E1208 09:15:07.696679 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.698785 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.698892 4662 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.709222 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287f
aaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.720149 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.742032 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.757227 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.757277 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.757291 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.757314 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.757330 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:07Z","lastTransitionTime":"2025-12-08T09:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.761327 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.781185 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.796547 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 
2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.808144 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.823370 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.836852 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.852682 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.859857 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.859895 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.859906 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.859922 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.859934 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:07Z","lastTransitionTime":"2025-12-08T09:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.873323 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.908766 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.935963 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.957017 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.961227 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.961253 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.961261 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.961274 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.961282 4662 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:07Z","lastTransitionTime":"2025-12-08T09:15:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:07 crc kubenswrapper[4662]: I1208 09:15:07.975164 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:06Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI1208 09:15:05.986064 5833 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.581048ms\\\\nF1208 09:15:05.986063 5833 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z]\\\\nI1208 09:15:05.986052 5833 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:07Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.063717 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.063840 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.063864 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.063895 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.063924 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.136078 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c"] Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.136814 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.139374 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.139654 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.145487 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.145518 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.145527 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.145540 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.145549 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.155943 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: E1208 09:15:08.159713 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.161987 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/daba0096-67d9-468a-a1fa-97fc0fa45ff1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: 
\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.162035 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/daba0096-67d9-468a-a1fa-97fc0fa45ff1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.162087 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gzfd\" (UniqueName: \"kubernetes.io/projected/daba0096-67d9-468a-a1fa-97fc0fa45ff1-kube-api-access-6gzfd\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.162131 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/daba0096-67d9-468a-a1fa-97fc0fa45ff1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.163708 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.163761 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.163775 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.163791 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.163802 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.168086 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: E1208 09:15:08.178142 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.182289 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/mult
us.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.182704 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.182778 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.182798 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.182819 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.182832 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: E1208 09:15:08.196626 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.201895 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.201947 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.201960 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.201978 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.201989 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.208032 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resourc
es\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: E1208 09:15:08.214460 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.218893 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.218978 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.219008 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.219042 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.219065 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.227641 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: E1208 09:15:08.237997 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: E1208 09:15:08.238142 4662 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.240034 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.240096 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.240110 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.240134 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.240148 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.248782 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:06Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI1208 09:15:05.986064 5833 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.581048ms\\\\nF1208 09:15:05.986063 5833 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z]\\\\nI1208 09:15:05.986052 5833 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e 
Where:[where\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\"
:[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.261882 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.263121 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/daba0096-67d9-468a-a1fa-97fc0fa45ff1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.263250 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/daba0096-67d9-468a-a1fa-97fc0fa45ff1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.263352 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gzfd\" (UniqueName: \"kubernetes.io/projected/daba0096-67d9-468a-a1fa-97fc0fa45ff1-kube-api-access-6gzfd\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.263452 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/daba0096-67d9-468a-a1fa-97fc0fa45ff1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.263816 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/daba0096-67d9-468a-a1fa-97fc0fa45ff1-env-overrides\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.264186 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/daba0096-67d9-468a-a1fa-97fc0fa45ff1-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.272185 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/daba0096-67d9-468a-a1fa-97fc0fa45ff1-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.275465 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.277674 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gzfd\" (UniqueName: \"kubernetes.io/projected/daba0096-67d9-468a-a1fa-97fc0fa45ff1-kube-api-access-6gzfd\") pod \"ovnkube-control-plane-749d76644c-hlk7c\" (UID: \"daba0096-67d9-468a-a1fa-97fc0fa45ff1\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.287089 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.298343 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.310322 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.322791 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.334463 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.342207 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.342252 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.342261 4662 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.342276 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.342286 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.345531 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.362944 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.373486 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.445012 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.445051 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.445060 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.445073 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.445083 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.448318 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.541877 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.548491 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.548532 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.548545 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.548564 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.548578 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.559619 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.574157 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.590994 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.615252 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0
d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.630881 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.650797 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.650829 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.650837 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.650851 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.650861 4662 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.652921 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\
":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:06Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI1208 09:15:05.986064 5833 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.581048ms\\\\nF1208 09:15:05.986063 5833 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z]\\\\nI1208 09:15:05.986052 5833 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert 
Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.665358 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.679240 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.689632 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.698026 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:08 crc kubenswrapper[4662]: E1208 09:15:08.698163 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.700640 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.702439 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" event={"ID":"daba0096-67d9-468a-a1fa-97fc0fa45ff1","Type":"ContainerStarted","Data":"166b4973d5d0131e9a49a339fd95a2c1a79ca37de52039508ffda4deb7a2a59e"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.704856 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/1.log" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.705257 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/0.log" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.707149 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3" exitCode=1 Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.707171 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.707211 4662 scope.go:117] "RemoveContainer" containerID="64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.707732 4662 scope.go:117] "RemoveContainer" containerID="6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3" Dec 08 09:15:08 crc kubenswrapper[4662]: E1208 09:15:08.707864 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.713123 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.725363 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.736586 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.746600 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.753392 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.753518 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.753610 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.753704 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.753814 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.760614 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf90
82a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.772937 4662 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.788788 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.800944 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.811729 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.828144 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:06Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI1208 09:15:05.986064 5833 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.581048ms\\\\nF1208 09:15:05.986063 5833 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z]\\\\nI1208 09:15:05.986052 5833 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:07Z\\\",\\\"message\\\":\\\"alse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, 
Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"be9dcc9e-c16a-4962-a6d2-4adeb0b929c4\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Swi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.838673 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.855535 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8
d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.857240 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.857266 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.857274 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.857458 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.857481 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.880227 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.891271 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.901916 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.912098 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.923968 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f
968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.934695 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":
[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.948346 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\
\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2
eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09
:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.957353 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.959820 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.959861 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.959874 4662 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.959890 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.959902 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:08Z","lastTransitionTime":"2025-12-08T09:15:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.971864 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:08 crc kubenswrapper[4662]: I1208 09:15:08.984019 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:08Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.062531 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.062582 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.062596 4662 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.062616 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.062631 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:09Z","lastTransitionTime":"2025-12-08T09:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.164338 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.164380 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.164392 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.164408 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.164419 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:09Z","lastTransitionTime":"2025-12-08T09:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.235721 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hd7m7"] Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.236129 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:09 crc kubenswrapper[4662]: E1208 09:15:09.236182 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.251800 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-re
sources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.264066 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.268065 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.268127 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.268143 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.268171 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.268188 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:09Z","lastTransitionTime":"2025-12-08T09:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.276658 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.283249 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.291235 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.306465 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.320502 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.331831 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.346387 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.356999 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.367275 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.374575 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.374623 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.374636 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.374656 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.374673 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:09Z","lastTransitionTime":"2025-12-08T09:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.384501 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf8kl\" (UniqueName: \"kubernetes.io/projected/42f18be0-5f4b-4e53-ac80-451fbfc548bf-kube-api-access-bf8kl\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.384573 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.384605 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: E1208 09:15:09.384678 4662 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:09 crc kubenswrapper[4662]: E1208 09:15:09.384758 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs podName:42f18be0-5f4b-4e53-ac80-451fbfc548bf nodeName:}" failed. No retries permitted until 2025-12-08 09:15:09.884725924 +0000 UTC m=+33.453753914 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs") pod "network-metrics-daemon-hd7m7" (UID: "42f18be0-5f4b-4e53-ac80-451fbfc548bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.398264 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.412327 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.436466 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0
d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.448636 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.465236 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:06Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI1208 09:15:05.986064 5833 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.581048ms\\\\nF1208 09:15:05.986063 5833 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z]\\\\nI1208 09:15:05.986052 5833 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:07Z\\\",\\\"message\\\":\\\"alse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, 
Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"be9dcc9e-c16a-4962-a6d2-4adeb0b929c4\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Swi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.476598 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.476671 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.476683 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:09 crc 
kubenswrapper[4662]: I1208 09:15:09.476700 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.476710 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:09Z","lastTransitionTime":"2025-12-08T09:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.485230 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf8kl\" (UniqueName: \"kubernetes.io/projected/42f18be0-5f4b-4e53-ac80-451fbfc548bf-kube-api-access-bf8kl\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.485283 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.499913 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf8kl\" (UniqueName: \"kubernetes.io/projected/42f18be0-5f4b-4e53-ac80-451fbfc548bf-kube-api-access-bf8kl\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.579951 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.580044 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.580063 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.580089 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.580107 4662 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:09Z","lastTransitionTime":"2025-12-08T09:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.682913 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.682942 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.682951 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.682964 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.682972 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:09Z","lastTransitionTime":"2025-12-08T09:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.696816 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:09 crc kubenswrapper[4662]: E1208 09:15:09.696929 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.697185 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:09 crc kubenswrapper[4662]: E1208 09:15:09.697230 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.711236 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" event={"ID":"daba0096-67d9-468a-a1fa-97fc0fa45ff1","Type":"ContainerStarted","Data":"9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.711275 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" event={"ID":"daba0096-67d9-468a-a1fa-97fc0fa45ff1","Type":"ContainerStarted","Data":"49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.713445 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/1.log" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.724267 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\
\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.735996 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.748080 4662 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.756164 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.764674 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.775813 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.784551 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.784803 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.784891 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.784987 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.785064 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:09Z","lastTransitionTime":"2025-12-08T09:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.787775 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.800069 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.815099 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.828856 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.847831 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:06Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI1208 09:15:05.986064 5833 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.581048ms\\\\nF1208 09:15:05.986063 5833 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z]\\\\nI1208 09:15:05.986052 5833 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:07Z\\\",\\\"message\\\":\\\"alse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, 
Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"be9dcc9e-c16a-4962-a6d2-4adeb0b929c4\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Swi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.857595 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 
09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.873488 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.883944 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.888174 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.888212 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.888222 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.888243 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.888256 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:09Z","lastTransitionTime":"2025-12-08T09:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.888684 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:09 crc kubenswrapper[4662]: E1208 09:15:09.888871 4662 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:09 crc kubenswrapper[4662]: E1208 09:15:09.888944 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs podName:42f18be0-5f4b-4e53-ac80-451fbfc548bf nodeName:}" failed. No retries permitted until 2025-12-08 09:15:10.888922898 +0000 UTC m=+34.457950888 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs") pod "network-metrics-daemon-hd7m7" (UID: "42f18be0-5f4b-4e53-ac80-451fbfc548bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.895299 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.905257 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.916539 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:09Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.990247 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.990293 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.990303 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.990319 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:09 crc kubenswrapper[4662]: I1208 09:15:09.990330 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:09Z","lastTransitionTime":"2025-12-08T09:15:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.092220 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.092268 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.092282 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.092300 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.092312 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:10Z","lastTransitionTime":"2025-12-08T09:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.194366 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.194409 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.194417 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.194431 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.194441 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:10Z","lastTransitionTime":"2025-12-08T09:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.296799 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.296837 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.296849 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.296866 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.296878 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:10Z","lastTransitionTime":"2025-12-08T09:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.399384 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.399423 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.399436 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.399452 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.399463 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:10Z","lastTransitionTime":"2025-12-08T09:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.501851 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.501897 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.501908 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.501927 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.501953 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:10Z","lastTransitionTime":"2025-12-08T09:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.604426 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.604497 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.604506 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.604518 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.604526 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:10Z","lastTransitionTime":"2025-12-08T09:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.694813 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.694912 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.694964 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:15:26.69493197 +0000 UTC m=+50.263959990 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.695044 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695048 4662 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.695094 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695135 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:26.695113574 +0000 UTC m=+50.264141564 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.695183 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695223 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695268 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695279 4662 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695334 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:26.695317699 +0000 UTC m=+50.264345689 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695236 4662 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695376 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:26.69536968 +0000 UTC m=+50.264397670 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695393 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695426 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695450 4662 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.695549 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:26.695530704 +0000 UTC m=+50.264558734 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.696911 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.697051 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.697298 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.697634 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.710375 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.710721 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.710937 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.711148 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.711304 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:10Z","lastTransitionTime":"2025-12-08T09:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.815030 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.815070 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.815078 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.815094 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.815103 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:10Z","lastTransitionTime":"2025-12-08T09:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.897720 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.897894 4662 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:10 crc kubenswrapper[4662]: E1208 09:15:10.897983 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs podName:42f18be0-5f4b-4e53-ac80-451fbfc548bf nodeName:}" failed. No retries permitted until 2025-12-08 09:15:12.89796757 +0000 UTC m=+36.466995550 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs") pod "network-metrics-daemon-hd7m7" (UID: "42f18be0-5f4b-4e53-ac80-451fbfc548bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.917301 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.917359 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.917376 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.917400 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:10 crc kubenswrapper[4662]: I1208 09:15:10.917417 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:10Z","lastTransitionTime":"2025-12-08T09:15:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.019669 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.019696 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.019704 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.019716 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.019724 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:11Z","lastTransitionTime":"2025-12-08T09:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.123053 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.123099 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.123110 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.123126 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.123137 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:11Z","lastTransitionTime":"2025-12-08T09:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.226262 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.226490 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.226502 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.226520 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.226533 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:11Z","lastTransitionTime":"2025-12-08T09:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.330550 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.330600 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.330616 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.330639 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.330655 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:11Z","lastTransitionTime":"2025-12-08T09:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.433202 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.433270 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.433282 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.433302 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.433314 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:11Z","lastTransitionTime":"2025-12-08T09:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.535568 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.535603 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.535611 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.535623 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.535632 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:11Z","lastTransitionTime":"2025-12-08T09:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.639501 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.639556 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.639580 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.639609 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.639631 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:11Z","lastTransitionTime":"2025-12-08T09:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.697364 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.697377 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:11 crc kubenswrapper[4662]: E1208 09:15:11.697693 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:11 crc kubenswrapper[4662]: E1208 09:15:11.697549 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.742415 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.742450 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.742459 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.742474 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.742482 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:11Z","lastTransitionTime":"2025-12-08T09:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.844753 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.844790 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.844798 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.844815 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.844824 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:11Z","lastTransitionTime":"2025-12-08T09:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.947494 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.947528 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.947539 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.947554 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:11 crc kubenswrapper[4662]: I1208 09:15:11.947566 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:11Z","lastTransitionTime":"2025-12-08T09:15:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.050084 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.050137 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.050147 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.050162 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.050173 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:12Z","lastTransitionTime":"2025-12-08T09:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.153098 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.153441 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.153453 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.153470 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.153481 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:12Z","lastTransitionTime":"2025-12-08T09:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.255915 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.255959 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.255967 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.255981 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.255990 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:12Z","lastTransitionTime":"2025-12-08T09:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.358850 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.358911 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.358927 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.358952 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.358968 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:12Z","lastTransitionTime":"2025-12-08T09:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.461534 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.461604 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.461626 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.461658 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.461692 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:12Z","lastTransitionTime":"2025-12-08T09:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.563321 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.563370 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.563384 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.563820 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.563865 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:12Z","lastTransitionTime":"2025-12-08T09:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.666241 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.666312 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.666333 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.666355 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.666368 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:12Z","lastTransitionTime":"2025-12-08T09:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.697121 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.697245 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:12 crc kubenswrapper[4662]: E1208 09:15:12.697388 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:12 crc kubenswrapper[4662]: E1208 09:15:12.697461 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.769823 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.769868 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.769883 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.769905 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.769921 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:12Z","lastTransitionTime":"2025-12-08T09:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.872383 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.872436 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.872501 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.872522 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.872535 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:12Z","lastTransitionTime":"2025-12-08T09:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.914931 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:12 crc kubenswrapper[4662]: E1208 09:15:12.915130 4662 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:12 crc kubenswrapper[4662]: E1208 09:15:12.915211 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs podName:42f18be0-5f4b-4e53-ac80-451fbfc548bf nodeName:}" failed. No retries permitted until 2025-12-08 09:15:16.915190143 +0000 UTC m=+40.484218153 (durationBeforeRetry 4s). 
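The failed mount above is rescheduled with a doubling delay: the 09:15:10 failure set durationBeforeRetry to 2s, and this attempt pushes the next one out to 09:15:16 with durationBeforeRetry 4s. Below is a minimal sketch of that doubling-with-cap pattern; the cap value is a placeholder assumption, and this illustrates the progression seen in the log rather than reproducing the kubelet's actual nestedpendingoperations implementation.

```go
// backoff.go: a minimal sketch of doubling backoff with a cap, matching
// the durationBeforeRetry progression (2s, 4s, ...) seen in this log.
// The cap below is a hypothetical value chosen for illustration; the
// kubelet's real retry policy may use a different base and maximum.
package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay, clamping it at maxDelay.
func nextBackoff(prev, maxDelay time.Duration) time.Duration {
	next := prev * 2
	if next > maxDelay {
		return maxDelay
	}
	return next
}

func main() {
	delay := 2 * time.Second                   // first retry, as logged at 09:15:10
	maxDelay := 2*time.Minute + 2*time.Second  // assumed cap for illustration
	for attempt := 1; attempt <= 5; attempt++ {
		fmt.Printf("attempt %d: retry in %v\n", attempt, delay)
		delay = nextBackoff(delay, maxDelay)
	}
}
```

Running this prints 2s, 4s, 8s, 16s, 32s, which is consistent with the 2s-then-4s spacing recorded for the metrics-certs volume in this section.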
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs") pod "network-metrics-daemon-hd7m7" (UID: "42f18be0-5f4b-4e53-ac80-451fbfc548bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.975068 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.975106 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.975114 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.975128 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:12 crc kubenswrapper[4662]: I1208 09:15:12.975138 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:12Z","lastTransitionTime":"2025-12-08T09:15:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.077730 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.077793 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.077805 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.077822 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.077834 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:13Z","lastTransitionTime":"2025-12-08T09:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.180538 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.180593 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.180606 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.180630 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.180645 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:13Z","lastTransitionTime":"2025-12-08T09:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.284377 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.284433 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.284475 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.284493 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.284505 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:13Z","lastTransitionTime":"2025-12-08T09:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.387293 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.387355 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.387373 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.387397 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.387420 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:13Z","lastTransitionTime":"2025-12-08T09:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.490877 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.490945 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.490965 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.490997 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.491028 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:13Z","lastTransitionTime":"2025-12-08T09:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.593016 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.593056 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.593067 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.593083 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.593094 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:13Z","lastTransitionTime":"2025-12-08T09:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.695182 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.695281 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.695297 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.695397 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.695410 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:13Z","lastTransitionTime":"2025-12-08T09:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.696531 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:13 crc kubenswrapper[4662]: E1208 09:15:13.696629 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.696531 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:13 crc kubenswrapper[4662]: E1208 09:15:13.696713 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.797362 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.797446 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.797468 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.797487 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.797501 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:13Z","lastTransitionTime":"2025-12-08T09:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.900239 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.900303 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.900314 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.900331 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:13 crc kubenswrapper[4662]: I1208 09:15:13.900343 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:13Z","lastTransitionTime":"2025-12-08T09:15:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.004078 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.004150 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.004173 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.004203 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.004226 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:14Z","lastTransitionTime":"2025-12-08T09:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.107139 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.107203 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.107220 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.107270 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.107287 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:14Z","lastTransitionTime":"2025-12-08T09:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.210707 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.210805 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.210824 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.210857 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.210872 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:14Z","lastTransitionTime":"2025-12-08T09:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.313374 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.313436 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.313452 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.313477 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.313492 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:14Z","lastTransitionTime":"2025-12-08T09:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.416505 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.416563 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.416576 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.416594 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.416610 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:14Z","lastTransitionTime":"2025-12-08T09:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.519085 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.519149 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.519162 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.519181 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.519194 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:14Z","lastTransitionTime":"2025-12-08T09:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.621826 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.621861 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.621871 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.621884 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.621894 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:14Z","lastTransitionTime":"2025-12-08T09:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.697338 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.697472 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:14 crc kubenswrapper[4662]: E1208 09:15:14.697626 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:14 crc kubenswrapper[4662]: E1208 09:15:14.697735 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.725220 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.725272 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.725283 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.725301 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.725313 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:14Z","lastTransitionTime":"2025-12-08T09:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.828596 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.828653 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.828664 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.828681 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.828692 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:14Z","lastTransitionTime":"2025-12-08T09:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.932046 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.932155 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.932171 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.932194 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:14 crc kubenswrapper[4662]: I1208 09:15:14.932211 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:14Z","lastTransitionTime":"2025-12-08T09:15:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.034919 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.034988 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.035005 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.035450 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.035491 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:15Z","lastTransitionTime":"2025-12-08T09:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.138388 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.138424 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.138433 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.138448 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.138456 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:15Z","lastTransitionTime":"2025-12-08T09:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.243122 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.243181 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.243196 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.243222 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.243247 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:15Z","lastTransitionTime":"2025-12-08T09:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.346575 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.346631 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.346644 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.346662 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.346676 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:15Z","lastTransitionTime":"2025-12-08T09:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.450599 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.450677 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.450693 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.450717 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.450758 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:15Z","lastTransitionTime":"2025-12-08T09:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.553442 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.553486 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.553499 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.553515 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.553527 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:15Z","lastTransitionTime":"2025-12-08T09:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.657178 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.657231 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.657244 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.657265 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.657279 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:15Z","lastTransitionTime":"2025-12-08T09:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.697544 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.697586 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:15 crc kubenswrapper[4662]: E1208 09:15:15.697694 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:15 crc kubenswrapper[4662]: E1208 09:15:15.697866 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.759643 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.759702 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.759713 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.759729 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.759764 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:15Z","lastTransitionTime":"2025-12-08T09:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.866401 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.866466 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.866483 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.866506 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.866524 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:15Z","lastTransitionTime":"2025-12-08T09:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.969817 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.969850 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.969862 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.969878 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:15 crc kubenswrapper[4662]: I1208 09:15:15.969891 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:15Z","lastTransitionTime":"2025-12-08T09:15:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.072413 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.072515 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.072532 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.072562 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.072577 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:16Z","lastTransitionTime":"2025-12-08T09:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.176533 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.176610 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.176624 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.176667 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.176682 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:16Z","lastTransitionTime":"2025-12-08T09:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.280251 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.280309 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.280325 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.280343 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.280359 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:16Z","lastTransitionTime":"2025-12-08T09:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.383601 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.383661 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.383672 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.383694 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.383708 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:16Z","lastTransitionTime":"2025-12-08T09:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.486015 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.486057 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.486066 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.486079 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.486087 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:16Z","lastTransitionTime":"2025-12-08T09:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.589443 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.589491 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.589501 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.589518 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.589533 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:16Z","lastTransitionTime":"2025-12-08T09:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.693381 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.693449 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.693459 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.693480 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.693493 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:16Z","lastTransitionTime":"2025-12-08T09:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.697300 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.697441 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:16 crc kubenswrapper[4662]: E1208 09:15:16.698907 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:16 crc kubenswrapper[4662]: E1208 09:15:16.699158 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.717180 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e
6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.743386 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://64b86ffdd408642f42c067276bc312a5743bf04c32a3de17ad3dd854240fbae7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:06Z\\\",\\\"message\\\":\\\" \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-api/machine-api-controllers\\\\\\\"}\\\\nI1208 09:15:05.986064 5833 services_controller.go:360] Finished syncing service machine-api-controllers on namespace openshift-machine-api for network=default : 1.581048ms\\\\nF1208 09:15:05.986063 5833 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:05Z is after 2025-08-24T17:21:41Z]\\\\nI1208 09:15:05.986052 5833 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:960d98b2-dc64-4e93-a4b6-9b19847af71e}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:07Z\\\",\\\"message\\\":\\\"alse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, 
Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"be9dcc9e-c16a-4962-a6d2-4adeb0b929c4\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Swi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.759491 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 
09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.786865 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.795942 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.796049 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.796061 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.796077 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.796088 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:16Z","lastTransitionTime":"2025-12-08T09:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.805350 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.821269 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.833548 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.848078 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.870331 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.885404 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.901505 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.902117 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.902196 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.902291 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.902452 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:16Z","lastTransitionTime":"2025-12-08T09:15:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.903065 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf90
82a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.915264 4662 
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.929117 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.944880 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.954648 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:16 crc kubenswrapper[4662]: E1208 09:15:16.954937 4662 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:16 crc kubenswrapper[4662]: E1208 09:15:16.955002 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs podName:42f18be0-5f4b-4e53-ac80-451fbfc548bf nodeName:}" failed. No retries permitted until 2025-12-08 09:15:24.954983261 +0000 UTC m=+48.524011251 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs") pod "network-metrics-daemon-hd7m7" (UID: "42f18be0-5f4b-4e53-ac80-451fbfc548bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.959083 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.971200 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:16 crc kubenswrapper[4662]: I1208 09:15:16.985497 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:16Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.007557 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.008039 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.008105 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.008195 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.008280 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:17Z","lastTransitionTime":"2025-12-08T09:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.111170 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.111227 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.111240 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.111258 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.111272 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:17Z","lastTransitionTime":"2025-12-08T09:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.214146 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.214193 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.214206 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.214223 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.214236 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:17Z","lastTransitionTime":"2025-12-08T09:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.318033 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.318105 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.318129 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.318164 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.318188 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:17Z","lastTransitionTime":"2025-12-08T09:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.421656 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.422051 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.422115 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.422185 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.422249 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:17Z","lastTransitionTime":"2025-12-08T09:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.525064 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.525130 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.525148 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.525172 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.525191 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:17Z","lastTransitionTime":"2025-12-08T09:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.627162 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.627208 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.627220 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.627236 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.627245 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:17Z","lastTransitionTime":"2025-12-08T09:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.697159 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.697215 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:17 crc kubenswrapper[4662]: E1208 09:15:17.697342 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:17 crc kubenswrapper[4662]: E1208 09:15:17.697478 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.729658 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.729727 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.729785 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.729818 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.729844 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:17Z","lastTransitionTime":"2025-12-08T09:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.831998 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.832078 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.832103 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.832132 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.832155 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:17Z","lastTransitionTime":"2025-12-08T09:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.934346 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.934390 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.934402 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.934419 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:17 crc kubenswrapper[4662]: I1208 09:15:17.934430 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:17Z","lastTransitionTime":"2025-12-08T09:15:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.038387 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.038447 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.038486 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.038507 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.038523 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.142812 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.142873 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.142892 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.142915 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.142932 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.245853 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.246011 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.246077 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.246100 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.246116 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.349089 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.349134 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.349146 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.349164 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.349178 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.452677 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.452774 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.452794 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.452822 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.452841 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.456813 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.456906 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.456926 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.456959 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.456980 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: E1208 09:15:18.473030 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:18Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.477528 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.477602 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.477620 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.477645 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.477667 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: E1208 09:15:18.490175 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:18Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.494777 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.494859 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.494874 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.494896 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.494925 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: E1208 09:15:18.508239 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:18Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.512689 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.512797 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.512815 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.512841 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.512857 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: E1208 09:15:18.529210 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:18Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.534592 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.535058 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.535451 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.535894 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.536199 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: E1208 09:15:18.560927 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:18Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:18 crc kubenswrapper[4662]: E1208 09:15:18.561689 4662 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.564853 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.564937 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.564958 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.564990 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.565013 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.667600 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.667642 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.667656 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.667673 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.667686 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.698055 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.698060 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:18 crc kubenswrapper[4662]: E1208 09:15:18.698389 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:18 crc kubenswrapper[4662]: E1208 09:15:18.698519 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.771015 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.771087 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.771107 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.771136 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.771164 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.874464 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.874541 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.874554 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.874574 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.874587 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.978687 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.978724 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.978731 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.978772 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:18 crc kubenswrapper[4662]: I1208 09:15:18.978785 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:18Z","lastTransitionTime":"2025-12-08T09:15:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.082230 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.082627 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.082765 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.082872 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.082935 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:19Z","lastTransitionTime":"2025-12-08T09:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.186196 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.186251 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.186262 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.186284 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.186298 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:19Z","lastTransitionTime":"2025-12-08T09:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.289300 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.289371 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.289384 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.289406 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.289419 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:19Z","lastTransitionTime":"2025-12-08T09:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.392821 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.392905 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.392921 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.392947 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.392963 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:19Z","lastTransitionTime":"2025-12-08T09:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.495894 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.495981 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.496025 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.496048 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.496059 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:19Z","lastTransitionTime":"2025-12-08T09:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.598979 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.599011 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.599020 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.599032 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.599041 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:19Z","lastTransitionTime":"2025-12-08T09:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.697422 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:19 crc kubenswrapper[4662]: E1208 09:15:19.697690 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.697498 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:19 crc kubenswrapper[4662]: E1208 09:15:19.698145 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.701707 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.701769 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.701790 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.701809 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.701823 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:19Z","lastTransitionTime":"2025-12-08T09:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.804869 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.805208 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.805354 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.805429 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.805494 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:19Z","lastTransitionTime":"2025-12-08T09:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.916894 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.916976 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.917004 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.917068 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:19 crc kubenswrapper[4662]: I1208 09:15:19.917083 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:19Z","lastTransitionTime":"2025-12-08T09:15:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.023325 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.023355 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.023365 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.023379 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.023389 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:20Z","lastTransitionTime":"2025-12-08T09:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.125505 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.125540 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.125548 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.125561 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.125572 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:20Z","lastTransitionTime":"2025-12-08T09:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.228336 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.228371 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.228379 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.228392 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.228402 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:20Z","lastTransitionTime":"2025-12-08T09:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.331666 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.331721 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.331734 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.331786 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.331801 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:20Z","lastTransitionTime":"2025-12-08T09:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.434394 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.434461 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.434484 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.434506 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.434520 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:20Z","lastTransitionTime":"2025-12-08T09:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.537133 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.537188 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.537200 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.537248 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.537263 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:20Z","lastTransitionTime":"2025-12-08T09:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.640125 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.640178 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.640195 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.640216 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.640232 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:20Z","lastTransitionTime":"2025-12-08T09:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.696893 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.696960 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:20 crc kubenswrapper[4662]: E1208 09:15:20.697057 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:20 crc kubenswrapper[4662]: E1208 09:15:20.697321 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.743170 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.743242 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.743261 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.743286 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.743302 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:20Z","lastTransitionTime":"2025-12-08T09:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.846809 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.846893 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.846908 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.846925 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.846939 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:20Z","lastTransitionTime":"2025-12-08T09:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.949509 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.949566 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.949582 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.949600 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:20 crc kubenswrapper[4662]: I1208 09:15:20.949615 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:20Z","lastTransitionTime":"2025-12-08T09:15:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.052208 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.052251 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.052260 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.052275 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.052285 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:21Z","lastTransitionTime":"2025-12-08T09:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.155285 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.155330 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.155340 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.155364 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.155380 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:21Z","lastTransitionTime":"2025-12-08T09:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.258477 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.258529 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.258544 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.258567 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.258585 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:21Z","lastTransitionTime":"2025-12-08T09:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.362070 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.362129 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.362145 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.362168 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.362182 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:21Z","lastTransitionTime":"2025-12-08T09:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.464982 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.465103 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.465127 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.465156 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.465179 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:21Z","lastTransitionTime":"2025-12-08T09:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.568407 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.568536 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.568553 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.568579 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.568597 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:21Z","lastTransitionTime":"2025-12-08T09:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.671034 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.671105 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.671117 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.671133 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.671144 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:21Z","lastTransitionTime":"2025-12-08T09:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.696577 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.696616 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:15:21 crc kubenswrapper[4662]: E1208 09:15:21.696696 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:15:21 crc kubenswrapper[4662]: E1208 09:15:21.696888 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.774298 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.774336 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.774344 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.774359 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.774368 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:21Z","lastTransitionTime":"2025-12-08T09:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.877561 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.877631 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.877644 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.877662 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.877689 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:21Z","lastTransitionTime":"2025-12-08T09:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.965731 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.966855 4662 scope.go:117] "RemoveContainer" containerID="6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.981654 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.981722 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.981761 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.981788 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.981807 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:21Z","lastTransitionTime":"2025-12-08T09:15:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:21 crc kubenswrapper[4662]: I1208 09:15:21.995085 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/v
ar/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containe
rID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:21Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.021838 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.047778 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:07Z\\\",\\\"message\\\":\\\"alse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"be9dcc9e-c16a-4962-a6d2-4adeb0b929c4\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Swi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.062298 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.077331 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.086136 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.086205 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.086221 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.086265 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.086281 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:22Z","lastTransitionTime":"2025-12-08T09:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.093807 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.111803 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.131310 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.147839 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.173368 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.190495 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.190540 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.190561 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.190589 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.190611 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:22Z","lastTransitionTime":"2025-12-08T09:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.191122 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.208586 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.226080 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.243391 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.260401 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.278078 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.294570 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.294623 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.294635 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.294653 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.294667 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:22Z","lastTransitionTime":"2025-12-08T09:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.296032 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.397281 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.397331 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.397344 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.397365 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.397375 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:22Z","lastTransitionTime":"2025-12-08T09:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.500404 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.500497 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.500518 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.500543 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.500563 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:22Z","lastTransitionTime":"2025-12-08T09:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.603790 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.603857 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.603876 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.604091 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.604105 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:22Z","lastTransitionTime":"2025-12-08T09:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.697418 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:22 crc kubenswrapper[4662]: E1208 09:15:22.697629 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.698157 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:22 crc kubenswrapper[4662]: E1208 09:15:22.698233 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.706330 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.706365 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.706376 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.706393 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.706406 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:22Z","lastTransitionTime":"2025-12-08T09:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.759812 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/1.log" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.763556 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f"} Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.764105 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.783307 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.796042 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.809261 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.809320 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.809331 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.809348 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.809360 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:22Z","lastTransitionTime":"2025-12-08T09:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.812376 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.829586 4662 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.843636 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.855279 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.870387 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.884169 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.897518 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.911995 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.912051 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.912064 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.912128 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.912148 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:22Z","lastTransitionTime":"2025-12-08T09:15:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.917381 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.938167 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.953321 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.976327 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:07Z\\\",\\\"message\\\":\\\"alse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"be9dcc9e-c16a-4962-a6d2-4adeb0b929c4\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Swi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"co
ntainerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:22 crc kubenswrapper[4662]: I1208 09:15:22.996919 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:22Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.009533 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.015783 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.015854 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.015876 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.015904 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.015921 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:23Z","lastTransitionTime":"2025-12-08T09:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.032068 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.046388 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.119703 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.120938 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.121334 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.121694 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.122024 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:23Z","lastTransitionTime":"2025-12-08T09:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.225827 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.226256 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.226324 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.226436 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.226526 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:23Z","lastTransitionTime":"2025-12-08T09:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.329925 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.329986 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.330003 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.330027 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.330045 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:23Z","lastTransitionTime":"2025-12-08T09:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.433943 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.433979 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.433988 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.434007 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.434023 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:23Z","lastTransitionTime":"2025-12-08T09:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.536354 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.536384 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.536391 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.536406 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.536417 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:23Z","lastTransitionTime":"2025-12-08T09:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.638630 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.638687 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.638699 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.638720 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.638758 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:23Z","lastTransitionTime":"2025-12-08T09:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.696984 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.697060 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:23 crc kubenswrapper[4662]: E1208 09:15:23.697183 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:23 crc kubenswrapper[4662]: E1208 09:15:23.697305 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.742257 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.742361 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.742375 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.742400 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.742414 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:23Z","lastTransitionTime":"2025-12-08T09:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.771143 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/2.log" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.771890 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/1.log" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.776138 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f" exitCode=1 Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.776197 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.776272 4662 scope.go:117] "RemoveContainer" containerID="6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.777952 4662 scope.go:117] "RemoveContainer" containerID="b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f" Dec 08 09:15:23 crc kubenswrapper[4662]: E1208 09:15:23.778362 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.790826 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.807595 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.825383 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.841107 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.845235 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.845349 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.845411 4662 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.845475 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.845532 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:23Z","lastTransitionTime":"2025-12-08T09:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.858312 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.876812 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:68
7fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.896017 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.911598 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.927305 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"syste
m-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.948888 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.948937 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.948949 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.948970 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.948987 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:23Z","lastTransitionTime":"2025-12-08T09:15:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.952017 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.975172 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:23 crc kubenswrapper[4662]: I1208 09:15:23.995380 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6d512c2ff9b054b3bb5296bf4687ddf8a1059d634b23f3e5b753520f7f7437c3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:07Z\\\",\\\"message\\\":\\\"alse, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}, services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"be9dcc9e-c16a-4962-a6d2-4adeb0b929c4\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{\\\\\\\"crc\\\\\\\"}, Routers:[]string{\\\\\\\"GR_crc\\\\\\\"}, Groups:[]string{}}}, built lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-dns/dns-default_TCP_node_router+switch_crc\\\\\\\", UUID:\\\\\\\"\\\\\\\", Protocol:\\\\\\\"TCP\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-dns/dns-default\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:53, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}, services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.4.10\\\\\\\", Port:9154, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), 
Swi\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:06Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:23Z\\\",\\\"message\\\":\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813447 6197 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813444 6197 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1208 09:15:22.813025 6197 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:23Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.009984 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.022103 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.035041 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f
968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.047804 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.051942 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.051965 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.051974 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.051987 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.051996 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:24Z","lastTransitionTime":"2025-12-08T09:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.058211 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.154723 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.154813 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.154828 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.154851 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.154865 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:24Z","lastTransitionTime":"2025-12-08T09:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.258358 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.258419 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.258435 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.258457 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.258473 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:24Z","lastTransitionTime":"2025-12-08T09:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.361713 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.361806 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.361822 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.361843 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.361887 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:24Z","lastTransitionTime":"2025-12-08T09:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.465187 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.465255 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.465273 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.465296 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.465314 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:24Z","lastTransitionTime":"2025-12-08T09:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.568412 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.568512 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.568529 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.568560 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.568578 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:24Z","lastTransitionTime":"2025-12-08T09:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.671552 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.671980 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.672125 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.672259 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.672419 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:24Z","lastTransitionTime":"2025-12-08T09:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.696896 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:24 crc kubenswrapper[4662]: E1208 09:15:24.697034 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.697255 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:24 crc kubenswrapper[4662]: E1208 09:15:24.697451 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.775409 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.775455 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.775464 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.775483 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.775495 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:24Z","lastTransitionTime":"2025-12-08T09:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.781938 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/2.log" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.785829 4662 scope.go:117] "RemoveContainer" containerID="b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f" Dec 08 09:15:24 crc kubenswrapper[4662]: E1208 09:15:24.786004 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.805465 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.820697 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.835143 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.861519 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0
d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.878332 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.878570 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 
08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.878654 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.878799 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.878891 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:24Z","lastTransitionTime":"2025-12-08T09:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.879347 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 
09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.901621 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:23Z\\\",\\\"message\\\":\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813447 6197 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813444 6197 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1208 09:15:22.813025 6197 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.918613 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.941244 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.956420 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.971777 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.981605 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.981671 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.981691 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.981715 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.981734 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:24Z","lastTransitionTime":"2025-12-08T09:15:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:24 crc kubenswrapper[4662]: I1208 09:15:24.985908 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:24Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.007716 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:25Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.023011 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:25Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.033366 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:25Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.047408 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:25Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.049976 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " 
pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:25 crc kubenswrapper[4662]: E1208 09:15:25.050216 4662 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:25 crc kubenswrapper[4662]: E1208 09:15:25.050353 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs podName:42f18be0-5f4b-4e53-ac80-451fbfc548bf nodeName:}" failed. No retries permitted until 2025-12-08 09:15:41.050334086 +0000 UTC m=+64.619362076 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs") pod "network-metrics-daemon-hd7m7" (UID: "42f18be0-5f4b-4e53-ac80-451fbfc548bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.059256 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-12-08T09:15:25Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.071954 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:25Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.083921 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 
09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.084164 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.084253 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.084384 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.084521 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:25Z","lastTransitionTime":"2025-12-08T09:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.187224 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.187284 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.187301 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.187323 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.187340 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:25Z","lastTransitionTime":"2025-12-08T09:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.289947 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.290397 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.290649 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.290846 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.290977 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:25Z","lastTransitionTime":"2025-12-08T09:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.393688 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.393797 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.393822 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.393852 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.393870 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:25Z","lastTransitionTime":"2025-12-08T09:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.497003 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.497042 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.497053 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.497069 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.497080 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:25Z","lastTransitionTime":"2025-12-08T09:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.599327 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.599591 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.599680 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.599779 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.599890 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:25Z","lastTransitionTime":"2025-12-08T09:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.696922 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.696941 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:25 crc kubenswrapper[4662]: E1208 09:15:25.697265 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:25 crc kubenswrapper[4662]: E1208 09:15:25.697120 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.703339 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.703608 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.703691 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.704127 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.704222 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:25Z","lastTransitionTime":"2025-12-08T09:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.807205 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.807249 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.807264 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.807285 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.807302 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:25Z","lastTransitionTime":"2025-12-08T09:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.910025 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.910101 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.910126 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.910156 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:25 crc kubenswrapper[4662]: I1208 09:15:25.910179 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:25Z","lastTransitionTime":"2025-12-08T09:15:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.015873 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.016239 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.016486 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.016676 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.016905 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:26Z","lastTransitionTime":"2025-12-08T09:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.119679 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.120105 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.120272 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.120474 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.120647 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:26Z","lastTransitionTime":"2025-12-08T09:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.223306 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.223825 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.223902 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.224063 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.224121 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:26Z","lastTransitionTime":"2025-12-08T09:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.326653 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.326713 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.326738 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.326813 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.326834 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:26Z","lastTransitionTime":"2025-12-08T09:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.429065 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.429099 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.429109 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.429124 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.429135 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:26Z","lastTransitionTime":"2025-12-08T09:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.532626 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.533040 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.533240 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.533315 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.533373 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:26Z","lastTransitionTime":"2025-12-08T09:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.636017 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.636280 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.636395 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.636525 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.636607 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:26Z","lastTransitionTime":"2025-12-08T09:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.696656 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.697115 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.696920 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.697368 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.708342 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.721704 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.732219 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.739157 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.739191 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.739200 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.739214 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.739223 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:26Z","lastTransitionTime":"2025-12-08T09:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.744868 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.758859 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.770124 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.772916 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.773038 4662 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.773062 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773166 4662 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773226 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773259 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773274 4662 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773353 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:15:58.773081178 +0000 UTC m=+82.342109198 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.773383 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773428 4662 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773434 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:58.773404296 +0000 UTC m=+82.342432296 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773467 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:58.773458627 +0000 UTC m=+82.342486627 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773481 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:58.773475128 +0000 UTC m=+82.342503128 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.773496 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773572 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773582 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773592 4662 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:26 crc kubenswrapper[4662]: E1208 09:15:26.773618 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-08 09:15:58.773611731 +0000 UTC m=+82.342639731 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.780695 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.797382 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.805515 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.813344 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.824236 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.834094 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.841497 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.841557 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.841573 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.841597 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.841614 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:26Z","lastTransitionTime":"2025-12-08T09:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.848680 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.871402 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269
019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"
,\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.884040 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.901900 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:23Z\\\",\\\"message\\\":\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813447 6197 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813444 6197 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1208 09:15:22.813025 6197 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.911975 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:26Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.943763 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.943804 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.943832 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.944044 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:26 crc kubenswrapper[4662]: I1208 09:15:26.944052 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:26Z","lastTransitionTime":"2025-12-08T09:15:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.047058 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.047101 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.047111 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.047126 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.047137 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:27Z","lastTransitionTime":"2025-12-08T09:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.148676 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.148717 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.148727 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.148769 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.148794 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:27Z","lastTransitionTime":"2025-12-08T09:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.250697 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.250804 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.250824 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.250850 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.250874 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:27Z","lastTransitionTime":"2025-12-08T09:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.352963 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.353043 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.353065 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.353094 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.353116 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:27Z","lastTransitionTime":"2025-12-08T09:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.395028 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.404963 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.413066 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.428803 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.439869 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.455450 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.455481 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.455489 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.455503 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.455512 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:27Z","lastTransitionTime":"2025-12-08T09:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.455564 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:23Z\\\",\\\"message\\\":\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813447 6197 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813444 6197 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1208 09:15:22.813025 6197 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recurs
iveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.467168 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 
09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.489595 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.506287 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.520825 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.538260 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.550264 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.557459 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.557496 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.557529 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.557548 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.557560 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:27Z","lastTransitionTime":"2025-12-08T09:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.563542 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.572926 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.593552 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.604024 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.613790 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.626117 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.638156 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:27Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.661026 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.661098 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.661114 4662 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.661132 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.661149 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:27Z","lastTransitionTime":"2025-12-08T09:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.696509 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:27 crc kubenswrapper[4662]: E1208 09:15:27.696638 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.696897 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:27 crc kubenswrapper[4662]: E1208 09:15:27.697158 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.763123 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.763329 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.763418 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.763542 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.763640 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:27Z","lastTransitionTime":"2025-12-08T09:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.866456 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.866525 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.866542 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.866566 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.866589 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:27Z","lastTransitionTime":"2025-12-08T09:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.969247 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.969311 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.969355 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.969381 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:27 crc kubenswrapper[4662]: I1208 09:15:27.969403 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:27Z","lastTransitionTime":"2025-12-08T09:15:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.071107 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.071145 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.071155 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.071170 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.071179 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.173005 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.173079 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.173092 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.173108 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.173118 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.275268 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.275318 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.275328 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.275341 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.275350 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.378779 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.378831 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.378843 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.378864 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.378878 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.481172 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.481223 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.481234 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.481251 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.481263 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.583377 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.583410 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.583418 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.583430 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.583438 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.686045 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.686097 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.686108 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.686128 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.686143 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.698532 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:28 crc kubenswrapper[4662]: E1208 09:15:28.698627 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.698784 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:28 crc kubenswrapper[4662]: E1208 09:15:28.699008 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.788959 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.788990 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.788998 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.789010 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.789019 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.890172 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.890221 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.890229 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.890243 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.890252 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: E1208 09:15:28.903972 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:28Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.906784 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.906816 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.906828 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.906844 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.906855 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: E1208 09:15:28.918646 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:28Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.923130 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.923208 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.923227 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.923251 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.923269 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: E1208 09:15:28.941059 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:28Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.945641 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.945678 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.945690 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.945707 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.945722 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: E1208 09:15:28.956724 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:28Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.960570 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.960610 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.960620 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.960635 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.960645 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:28 crc kubenswrapper[4662]: E1208 09:15:28.977421 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:28Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:28 crc kubenswrapper[4662]: E1208 09:15:28.977644 4662 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.979169 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
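Every one of the patch attempts above fails for the same reason: the API server cannot call the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because that endpoint's serving certificate expired on 2025-08-24T17:21:41Z, long before the current clock of 2025-12-08, and after the retry budget is spent the kubelet gives up with "update node status exceeds retry count". A minimal diagnostic sketch follows (an annotation, not part of the log; assumes Python plus the cryptography package, version 42 or newer, run on the node itself) that pulls the peer certificate from the endpoint named in the error and prints its validity window:

import socket
import ssl

from cryptography import x509

# Endpoint taken from the webhook URL quoted in the error above.
HOST, PORT = "127.0.0.1", 9743

# Disable verification: the point is to inspect the certificate that is
# failing validation, not to trust it.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)  # raw DER bytes of the peer cert

cert = x509.load_der_x509_certificate(der)
print("subject:  ", cert.subject.rfc4514_string())
print("notBefore:", cert.not_valid_before_utc)
# If this prints 2025-08-24 17:21:41+00:00, it matches the
# "x509: certificate has expired" failure in the log.
print("notAfter: ", cert.not_valid_after_utc)

If the expiry is confirmed, the usual remedy on OpenShift/CRC is to let the cluster rotate its internal certificates (restarting with a corrected clock, or waiting for the cert recovery controllers) rather than replacing this certificate by hand.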
event="NodeHasSufficientMemory" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.979190 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.979201 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.979214 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:28 crc kubenswrapper[4662]: I1208 09:15:28.979242 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:28Z","lastTransitionTime":"2025-12-08T09:15:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.081561 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.081602 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.081613 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.081633 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.081649 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:29Z","lastTransitionTime":"2025-12-08T09:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.185471 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.185540 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.185582 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.185616 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.185638 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:29Z","lastTransitionTime":"2025-12-08T09:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.289110 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.289163 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.289178 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.289201 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.289218 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:29Z","lastTransitionTime":"2025-12-08T09:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.392199 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.392289 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.392307 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.392330 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.392346 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:29Z","lastTransitionTime":"2025-12-08T09:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.496131 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.496204 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.496259 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.496289 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.496310 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:29Z","lastTransitionTime":"2025-12-08T09:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.599031 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.599080 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.599093 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.599108 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.599118 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:29Z","lastTransitionTime":"2025-12-08T09:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.697489 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:29 crc kubenswrapper[4662]: E1208 09:15:29.697632 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.697515 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:29 crc kubenswrapper[4662]: E1208 09:15:29.697948 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
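The NodeNotReady heartbeats and the sandbox failures above all trace back to a single condition: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet, so the kubelet refuses to create pod sandboxes for networked pods. A small sketch follows (again an annotation, not from the log; Python, with the path taken from the error text) of the directory check that the runtime's network-readiness test effectively boils down to, simplified from the real CRI-O/kubelet logic:

from pathlib import Path

# Directory quoted in the "no CNI configuration file" error above.
CNI_DIR = Path("/etc/kubernetes/cni/net.d")

# CNI plugins drop *.conf, *.conflist, or *.json files here; until one
# appears, the runtime keeps reporting NetworkReady=false.
confs = sorted(
    p.name for p in CNI_DIR.glob("*")
    if p.suffix in {".conf", ".conflist", ".json"}
) if CNI_DIR.is_dir() else []

if confs:
    print("CNI config present:", ", ".join(confs))
else:
    # Matches the log: the network provider (OVN-Kubernetes on this
    # cluster) has not written its config, so the node stays NotReady.
    print(f"no CNI configuration file in {CNI_DIR}/ - has the network provider started?")

On this cluster the config is written by the OVN-Kubernetes pods once they come up; since their node-identity webhook is broken by the expired certificate above, the file never appears and the kubelet loops on the same NotReady condition.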
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.701490 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.701515 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.701524 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.701536 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.701545 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:29Z","lastTransitionTime":"2025-12-08T09:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.803493 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.803794 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.803902 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.803978 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.804037 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:29Z","lastTransitionTime":"2025-12-08T09:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.907585 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.907620 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.907628 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.907641 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:29 crc kubenswrapper[4662]: I1208 09:15:29.907649 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:29Z","lastTransitionTime":"2025-12-08T09:15:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.010964 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.011307 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.011439 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.011574 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.011716 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:30Z","lastTransitionTime":"2025-12-08T09:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.114363 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.114433 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.114450 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.114474 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.114494 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:30Z","lastTransitionTime":"2025-12-08T09:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.217199 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.217250 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.217266 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.217291 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.217309 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:30Z","lastTransitionTime":"2025-12-08T09:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.319945 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.320511 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.320580 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.320646 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.320771 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:30Z","lastTransitionTime":"2025-12-08T09:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.423503 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.423565 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.423585 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.423609 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.423627 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:30Z","lastTransitionTime":"2025-12-08T09:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.527905 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.527952 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.527966 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.527985 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.527999 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:30Z","lastTransitionTime":"2025-12-08T09:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.631041 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.631103 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.631127 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.631157 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.631182 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:30Z","lastTransitionTime":"2025-12-08T09:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.697232 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.697263 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:30 crc kubenswrapper[4662]: E1208 09:15:30.697398 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:30 crc kubenswrapper[4662]: E1208 09:15:30.697500 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.733563 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.733631 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.733651 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.733702 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.733720 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:30Z","lastTransitionTime":"2025-12-08T09:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.836909 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.836960 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.836975 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.836994 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.837007 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:30Z","lastTransitionTime":"2025-12-08T09:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.940528 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.940628 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.940655 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.940685 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:30 crc kubenswrapper[4662]: I1208 09:15:30.940708 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:30Z","lastTransitionTime":"2025-12-08T09:15:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.044287 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.044364 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.044386 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.044413 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.044430 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:31Z","lastTransitionTime":"2025-12-08T09:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.147604 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.147639 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.147650 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.147666 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.147679 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:31Z","lastTransitionTime":"2025-12-08T09:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.251132 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.251199 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.251227 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.251257 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.251272 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:31Z","lastTransitionTime":"2025-12-08T09:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.354845 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.354890 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.354903 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.354924 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.354938 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:31Z","lastTransitionTime":"2025-12-08T09:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.458108 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.458193 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.458211 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.458250 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.458319 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:31Z","lastTransitionTime":"2025-12-08T09:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.561982 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.562035 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.562045 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.562064 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.562075 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:31Z","lastTransitionTime":"2025-12-08T09:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.666414 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.666459 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.666469 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.666484 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.666503 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:31Z","lastTransitionTime":"2025-12-08T09:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.696638 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.696659 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:31 crc kubenswrapper[4662]: E1208 09:15:31.696907 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:31 crc kubenswrapper[4662]: E1208 09:15:31.697020 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.770596 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.770665 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.770684 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.770711 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.770729 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:31Z","lastTransitionTime":"2025-12-08T09:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.873683 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.873733 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.873758 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.873777 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.873789 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:31Z","lastTransitionTime":"2025-12-08T09:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.976840 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.976885 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.976897 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.976915 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:31 crc kubenswrapper[4662]: I1208 09:15:31.976930 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:31Z","lastTransitionTime":"2025-12-08T09:15:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.080931 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.080986 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.081001 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.081021 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.081035 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:32Z","lastTransitionTime":"2025-12-08T09:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.184347 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.184429 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.184488 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.184522 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.184548 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:32Z","lastTransitionTime":"2025-12-08T09:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.287223 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.287283 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.287301 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.287329 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.287443 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:32Z","lastTransitionTime":"2025-12-08T09:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.391220 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.391286 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.391304 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.391341 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.391360 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:32Z","lastTransitionTime":"2025-12-08T09:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.494566 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.494608 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.494616 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.494631 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.494640 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:32Z","lastTransitionTime":"2025-12-08T09:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.596608 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.596640 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.596649 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.596664 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.596675 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:32Z","lastTransitionTime":"2025-12-08T09:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.697001 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.697032 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:32 crc kubenswrapper[4662]: E1208 09:15:32.697334 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:32 crc kubenswrapper[4662]: E1208 09:15:32.697453 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.698961 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.698998 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.699025 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.699042 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.699054 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:32Z","lastTransitionTime":"2025-12-08T09:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.801319 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.801370 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.801382 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.801401 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.801412 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:32Z","lastTransitionTime":"2025-12-08T09:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.904209 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.904296 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.904313 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.904337 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:32 crc kubenswrapper[4662]: I1208 09:15:32.904353 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:32Z","lastTransitionTime":"2025-12-08T09:15:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.006328 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.006391 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.006403 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.006420 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.006431 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:33Z","lastTransitionTime":"2025-12-08T09:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.109223 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.109342 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.109421 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.109462 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.109525 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:33Z","lastTransitionTime":"2025-12-08T09:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.212274 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.212325 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.212341 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.212365 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.212384 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:33Z","lastTransitionTime":"2025-12-08T09:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.315275 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.315341 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.315351 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.315366 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.315379 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:33Z","lastTransitionTime":"2025-12-08T09:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.418275 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.418317 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.418329 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.418346 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.418359 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:33Z","lastTransitionTime":"2025-12-08T09:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.520946 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.520984 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.520994 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.521010 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.521022 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:33Z","lastTransitionTime":"2025-12-08T09:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.623165 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.623204 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.623214 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.623231 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.623242 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:33Z","lastTransitionTime":"2025-12-08T09:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.696375 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.696419 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:33 crc kubenswrapper[4662]: E1208 09:15:33.696507 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:33 crc kubenswrapper[4662]: E1208 09:15:33.696573 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.725662 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.725731 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.725788 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.725812 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.725829 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:33Z","lastTransitionTime":"2025-12-08T09:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.828885 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.828946 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.828967 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.828995 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.829016 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:33Z","lastTransitionTime":"2025-12-08T09:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.932926 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.933055 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.933075 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.933098 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:33 crc kubenswrapper[4662]: I1208 09:15:33.933116 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:33Z","lastTransitionTime":"2025-12-08T09:15:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.040602 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.040673 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.040977 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.041015 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.041033 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.144421 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.144509 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.144520 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.144537 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.144548 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.247028 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.247343 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.247503 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.247639 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.247836 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.351532 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.351608 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.351651 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.351685 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.351709 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.454251 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.454329 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.454354 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.454385 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.454413 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.454413 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.557107 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.557145 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.557154 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.557167 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.557176 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.660060 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.660109 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.660126 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.660152 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.660169 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.697996 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:15:34 crc kubenswrapper[4662]: E1208 09:15:34.698395 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.698084 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:15:34 crc kubenswrapper[4662]: E1208 09:15:34.699493 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.762952 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.763003 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.763027 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.763044 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.763057 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.866232 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.866500 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.866580 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.866673 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
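Every one of these NotReady heartbeats traces back to the same root message: no CNI configuration file in /etc/kubernetes/cni/net.d/. The sketch below approximates, under stated assumptions, the check a CNI config loader effectively performs — scan the directory for *.conf, *.conflist, or *.json files. hasCNIConfig is an illustrative stand-in written against the Go standard library, not the actual ocicni/libcni implementation the kubelet uses.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasCNIConfig reports whether dir contains any file a CNI loader would
// treat as a network configuration (*.conf, *.conflist, *.json).
func hasCNIConfig(dir string) (bool, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return false, err
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Directory path taken from the log messages above.
	ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
	if err != nil || !ok {
		fmt.Println("no CNI configuration file found; network plugin not ready:", err)
		return
	}
	fmt.Println("CNI configuration present")
}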
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.866732 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.969142 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.969387 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.969471 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.969563 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:34 crc kubenswrapper[4662]: I1208 09:15:34.969645 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:34Z","lastTransitionTime":"2025-12-08T09:15:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.072307 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.072372 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.072393 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.072422 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.072481 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:35Z","lastTransitionTime":"2025-12-08T09:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.174995 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.175342 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.175426 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.175519 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.175610 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:35Z","lastTransitionTime":"2025-12-08T09:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.278622 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.278664 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.278671 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.278686 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.278695 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:35Z","lastTransitionTime":"2025-12-08T09:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.381022 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.381049 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.381056 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.381068 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.381077 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:35Z","lastTransitionTime":"2025-12-08T09:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.483139 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.483707 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.483865 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.483997 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.484109 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:35Z","lastTransitionTime":"2025-12-08T09:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.587321 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.587363 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.587372 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.587387 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.587397 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:35Z","lastTransitionTime":"2025-12-08T09:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.690103 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.690441 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.690506 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.690571 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.690636 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:35Z","lastTransitionTime":"2025-12-08T09:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.697536 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:15:35 crc kubenswrapper[4662]: E1208 09:15:35.697682 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.697911 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:15:35 crc kubenswrapper[4662]: E1208 09:15:35.697975 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.793102 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.793487 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.793558 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.793628 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.793701 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:35Z","lastTransitionTime":"2025-12-08T09:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.897269 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.897687 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.897805 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.897907 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:35 crc kubenswrapper[4662]: I1208 09:15:35.897980 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:35Z","lastTransitionTime":"2025-12-08T09:15:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.001442 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.001917 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.002011 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.002101 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.002169 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:36Z","lastTransitionTime":"2025-12-08T09:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.105348 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.105836 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.105936 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.106255 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.106365 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:36Z","lastTransitionTime":"2025-12-08T09:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.209772 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.210105 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.210261 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.210486 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.210688 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:36Z","lastTransitionTime":"2025-12-08T09:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.313807 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.314294 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.314385 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.314459 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.314529 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:36Z","lastTransitionTime":"2025-12-08T09:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.417857 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.417912 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.417926 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.417950 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.417967 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:36Z","lastTransitionTime":"2025-12-08T09:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.521056 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.521123 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.521136 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.521160 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.521177 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:36Z","lastTransitionTime":"2025-12-08T09:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.624204 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.624274 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.624289 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.624311 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.624325 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:36Z","lastTransitionTime":"2025-12-08T09:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.696693 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:15:36 crc kubenswrapper[4662]: E1208 09:15:36.697536 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.697628 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
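With the same five entries repeating roughly ten times a second, the journal is easier to audit with a small tally than by eye. The following is a hypothetical stdlib-only filter (not part of any OpenShift component) that counts the "Recording event message" entries per event type from a journalctl dump on stdin, e.g. journalctl -u kubelet | go run tally.go.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Matches the kubelet_node_status.go:724 entries seen throughout this log.
	re := regexp.MustCompile(`"Recording event message for node" node="[^"]+" event="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal entries can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	for ev, n := range counts {
		fmt.Printf("%-28s %d\n", ev, n)
	}
}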
Dec 08 09:15:36 crc kubenswrapper[4662]: E1208 09:15:36.697866 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.715358 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.728109 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.728149 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.728163 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.728184 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
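The status-update failures here are a separate fault from the missing CNI config: the node-identity webhook at https://127.0.0.1:9743 is serving a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2025-12-08. A sketch of confirming that from the host with stdlib Go follows — it handshakes without verification and reads the leaf certificate's validity window. The address is taken from the log; everything else is illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Skip verification so the handshake succeeds even though the chain is invalid.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificates presented")
		return
	}
	leaf := certs[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n", leaf.Subject, leaf.NotBefore, leaf.NotAfter)
	if now := time.Now(); now.After(leaf.NotAfter) {
		// Same condition the kubelet reports as "certificate has expired or is not yet valid".
		fmt.Printf("expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
	}
}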
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.728201 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:36Z","lastTransitionTime":"2025-12-08T09:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.734834 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z"
Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.751249 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.764818 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.785512 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.801839 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.815506 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.832214 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.832259 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.832270 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.832189 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.832287 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.832411 4662 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:36Z","lastTransitionTime":"2025-12-08T09:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.843266 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.854089 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.867014 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c617143-89ca-466c-83a1-ebd101948157\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a199dd10a8d8a2a7788c5c31a11e6a2aa46d73271d32f065ccc144efc67c81af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b37548382b18608f951df58067092f5866a5cceb62e0abcefff44e03cd54ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84fc22fcc1e843d22fcc4e29869b46df1a476fa4795df8e160a445bf5b3bd9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.881432 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.896056 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.908263 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\
"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.927113 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0
d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.935120 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.935633 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 
08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.935719 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.935814 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.935877 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:36Z","lastTransitionTime":"2025-12-08T09:15:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.942558 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"r
esource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 
09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.960271 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:23Z\\\",\\\"message\\\":\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813447 6197 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813444 6197 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1208 09:15:22.813025 6197 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:36 crc kubenswrapper[4662]: I1208 09:15:36.976240 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:36Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.039295 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.039364 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.039381 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.039402 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.039418 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:37Z","lastTransitionTime":"2025-12-08T09:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.143431 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.144129 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.144242 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.144344 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.144475 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:37Z","lastTransitionTime":"2025-12-08T09:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.247358 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.247440 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.247459 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.247489 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.247510 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:37Z","lastTransitionTime":"2025-12-08T09:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.349690 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.349729 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.349737 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.349766 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.349774 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:37Z","lastTransitionTime":"2025-12-08T09:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.452693 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.452775 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.452790 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.452810 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.452823 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:37Z","lastTransitionTime":"2025-12-08T09:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.556205 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.556257 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.556270 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.556290 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.556303 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:37Z","lastTransitionTime":"2025-12-08T09:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.659617 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.659678 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.659698 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.659724 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.659772 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:37Z","lastTransitionTime":"2025-12-08T09:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.696965 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:37 crc kubenswrapper[4662]: E1208 09:15:37.697109 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.697174 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:37 crc kubenswrapper[4662]: E1208 09:15:37.697230 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.698011 4662 scope.go:117] "RemoveContainer" containerID="b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f" Dec 08 09:15:37 crc kubenswrapper[4662]: E1208 09:15:37.698264 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.763126 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.763171 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.763182 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.763199 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.763209 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:37Z","lastTransitionTime":"2025-12-08T09:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.867006 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.867082 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.867102 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.867141 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.867162 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:37Z","lastTransitionTime":"2025-12-08T09:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.970257 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.970324 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.970349 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.970375 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:37 crc kubenswrapper[4662]: I1208 09:15:37.970393 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:37Z","lastTransitionTime":"2025-12-08T09:15:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.072892 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.072936 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.072946 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.072979 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.072990 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:38Z","lastTransitionTime":"2025-12-08T09:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.176360 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.176510 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.176529 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.176584 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.176601 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:38Z","lastTransitionTime":"2025-12-08T09:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.279540 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.279594 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.279609 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.279629 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.279641 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:38Z","lastTransitionTime":"2025-12-08T09:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.382632 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.382795 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.382815 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.383269 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.383344 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:38Z","lastTransitionTime":"2025-12-08T09:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.487226 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.487290 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.487306 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.487326 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.487339 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:38Z","lastTransitionTime":"2025-12-08T09:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.604997 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.605040 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.605056 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.605078 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.605089 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:38Z","lastTransitionTime":"2025-12-08T09:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.696873 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:38 crc kubenswrapper[4662]: E1208 09:15:38.697065 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.696897 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:38 crc kubenswrapper[4662]: E1208 09:15:38.697343 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.707948 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.708050 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.708068 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.708093 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.708107 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:38Z","lastTransitionTime":"2025-12-08T09:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.812291 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.812339 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.812351 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.812373 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.812386 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:38Z","lastTransitionTime":"2025-12-08T09:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.916005 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.916046 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.916060 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.916080 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:38 crc kubenswrapper[4662]: I1208 09:15:38.916091 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:38Z","lastTransitionTime":"2025-12-08T09:15:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.018842 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.018890 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.018901 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.018921 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.018941 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.121480 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.121536 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.121553 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.121582 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.121601 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.225079 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.225140 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.225163 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.225191 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.225213 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.330548 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.330634 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.330655 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.330724 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.330839 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.373198 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.373272 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.373283 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.373307 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.373318 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: E1208 09:15:39.391479 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:39Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.396885 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.396963 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.396982 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.397015 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.397033 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: E1208 09:15:39.417873 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:39Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.423929 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.423962 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.423971 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.423989 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.423999 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: E1208 09:15:39.438852 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:39Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.443689 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.443878 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.443907 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.443942 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.443962 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: E1208 09:15:39.463389 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 08 09:15:39 crc kubenswrapper[4662]: E1208 09:15:39.463389 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the 09:15:39.438852 attempt above; elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:39Z is after 2025-08-24T17:21:41Z"
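Every node-status patch in this stretch is rejected the same way: the API server forwards the PATCH to the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and the TLS handshake fails because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2025-12-08, a typical symptom of resuming a CRC VM long after its certificates were minted. A short Go sketch for inspecting that endpoint's certificate window from the node; the address is taken from the log line, and the verification skip is deliberate so the handshake completes even with the expired chain:

// certcheck.go - diagnostic sketch (not kubelet code): dial the webhook
// endpoint from the log, skip chain verification so the handshake succeeds
// despite the expired certificate, and print each peer certificate's
// validity window.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // diagnostic only: we want the certs, not a verified chain
	})
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer conn.Close()

	now := time.Now()
	for _, cert := range conn.ConnectionState().PeerCertificates {
		fmt.Printf("subject=%s notBefore=%s notAfter=%s expired=%t\n",
			cert.Subject, cert.NotBefore.Format(time.RFC3339),
			cert.NotAfter.Format(time.RFC3339), now.After(cert.NotAfter))
	}
}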
event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.470509 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.470538 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.470556 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: E1208 09:15:39.486928 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:39Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.490146 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.490219 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.490239 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.490265 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.490280 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.594479 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.594541 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.594556 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.594579 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.594593 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.696787 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:39 crc kubenswrapper[4662]: E1208 09:15:39.696937 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.697268 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.697280 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.697327 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.697344 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.697370 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.697389 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: E1208 09:15:39.697339 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.801088 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.801142 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.801183 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.801208 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.801224 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.904516 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.904570 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.904586 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.904606 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:39 crc kubenswrapper[4662]: I1208 09:15:39.904619 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:39Z","lastTransitionTime":"2025-12-08T09:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.008166 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.008211 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.008220 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.008237 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.008249 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:40Z","lastTransitionTime":"2025-12-08T09:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.111291 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.111696 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.111803 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.111979 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.112064 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:40Z","lastTransitionTime":"2025-12-08T09:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.223627 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.224302 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.224408 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.224542 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.224837 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:40Z","lastTransitionTime":"2025-12-08T09:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.327912 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.327972 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.327981 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.328003 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.328017 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:40Z","lastTransitionTime":"2025-12-08T09:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.430801 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.430834 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.430844 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.430860 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.430872 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:40Z","lastTransitionTime":"2025-12-08T09:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.533371 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.533408 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.533418 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.533439 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.533450 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:40Z","lastTransitionTime":"2025-12-08T09:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.636902 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.636963 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.636972 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.636989 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.637000 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:40Z","lastTransitionTime":"2025-12-08T09:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.696878 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:40 crc kubenswrapper[4662]: E1208 09:15:40.699075 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.698931 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:40 crc kubenswrapper[4662]: E1208 09:15:40.699549 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.738913 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.738970 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.738981 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.738995 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.739006 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:40Z","lastTransitionTime":"2025-12-08T09:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.841696 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.841760 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.841774 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.841790 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.841800 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:40Z","lastTransitionTime":"2025-12-08T09:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.944974 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.945018 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.945035 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.945057 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:40 crc kubenswrapper[4662]: I1208 09:15:40.945074 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:40Z","lastTransitionTime":"2025-12-08T09:15:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.047278 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.047337 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.047349 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.047371 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.047387 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:41Z","lastTransitionTime":"2025-12-08T09:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.141314 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:41 crc kubenswrapper[4662]: E1208 09:15:41.141509 4662 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:41 crc kubenswrapper[4662]: E1208 09:15:41.141581 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs podName:42f18be0-5f4b-4e53-ac80-451fbfc548bf nodeName:}" failed. No retries permitted until 2025-12-08 09:16:13.141562099 +0000 UTC m=+96.710590089 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs") pod "network-metrics-daemon-hd7m7" (UID: "42f18be0-5f4b-4e53-ac80-451fbfc548bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.149979 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.150026 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.150040 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.150060 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.150071 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:41Z","lastTransitionTime":"2025-12-08T09:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.252714 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.252778 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.252789 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.252804 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.252819 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:41Z","lastTransitionTime":"2025-12-08T09:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.354944 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.355008 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.355027 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.355053 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.355069 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:41Z","lastTransitionTime":"2025-12-08T09:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.457784 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.457831 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.457841 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.457860 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.457871 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:41Z","lastTransitionTime":"2025-12-08T09:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.559823 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.559863 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.559871 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.559902 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.559913 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:41Z","lastTransitionTime":"2025-12-08T09:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.662653 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.662698 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.662706 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.662723 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.662733 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:41Z","lastTransitionTime":"2025-12-08T09:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.697059 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.697120 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:41 crc kubenswrapper[4662]: E1208 09:15:41.697263 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:41 crc kubenswrapper[4662]: E1208 09:15:41.697355 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.767504 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.767562 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.767574 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.767594 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.767606 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:41Z","lastTransitionTime":"2025-12-08T09:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.869984 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.870040 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.870057 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.870079 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.870096 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:41Z","lastTransitionTime":"2025-12-08T09:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.972573 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.972655 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.972674 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.972696 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:41 crc kubenswrapper[4662]: I1208 09:15:41.972718 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:41Z","lastTransitionTime":"2025-12-08T09:15:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.074848 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.074889 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.074899 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.074918 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.074928 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:42Z","lastTransitionTime":"2025-12-08T09:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.176805 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.176875 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.176888 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.176904 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.176938 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:42Z","lastTransitionTime":"2025-12-08T09:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.278644 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.278692 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.278703 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.278722 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.278758 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:42Z","lastTransitionTime":"2025-12-08T09:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.381090 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.381148 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.381159 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.381177 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.381189 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:42Z","lastTransitionTime":"2025-12-08T09:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.484310 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.484368 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.484379 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.484395 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.484407 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:42Z","lastTransitionTime":"2025-12-08T09:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.587508 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.587550 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.587565 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.587586 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.587601 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:42Z","lastTransitionTime":"2025-12-08T09:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.690476 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.690529 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.690546 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.690570 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.690588 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:42Z","lastTransitionTime":"2025-12-08T09:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.697100 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.697139 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:42 crc kubenswrapper[4662]: E1208 09:15:42.697304 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:42 crc kubenswrapper[4662]: E1208 09:15:42.697392 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.793217 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.793261 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.793270 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.793283 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.793295 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:42Z","lastTransitionTime":"2025-12-08T09:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.896326 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.896381 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.896393 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.896413 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:42 crc kubenswrapper[4662]: I1208 09:15:42.896424 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:42Z","lastTransitionTime":"2025-12-08T09:15:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.000400 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.000469 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.000482 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.000499 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.000511 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:43Z","lastTransitionTime":"2025-12-08T09:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.102450 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.102494 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.102509 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.102567 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.102586 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:43Z","lastTransitionTime":"2025-12-08T09:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.204385 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.204420 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.204462 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.204476 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.204485 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:43Z","lastTransitionTime":"2025-12-08T09:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.306684 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.306726 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.306753 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.306769 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.306781 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:43Z","lastTransitionTime":"2025-12-08T09:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.409136 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.409183 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.409195 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.409214 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.409225 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:43Z","lastTransitionTime":"2025-12-08T09:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.511352 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.511386 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.511394 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.511406 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.511416 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:43Z","lastTransitionTime":"2025-12-08T09:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.614037 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.614076 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.614090 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.614108 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.614120 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:43Z","lastTransitionTime":"2025-12-08T09:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.697404 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:43 crc kubenswrapper[4662]: E1208 09:15:43.697524 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.697404 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:43 crc kubenswrapper[4662]: E1208 09:15:43.697634 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.715949 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.715996 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.716006 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.716017 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.716027 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:43Z","lastTransitionTime":"2025-12-08T09:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.818857 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.818889 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.818902 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.818919 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.818930 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:43Z","lastTransitionTime":"2025-12-08T09:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.844812 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-92hkj_adeadc12-d6e2-4168-a1c0-de79d16c8de9/kube-multus/0.log" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.844849 4662 generic.go:334] "Generic (PLEG): container finished" podID="adeadc12-d6e2-4168-a1c0-de79d16c8de9" containerID="991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f" exitCode=1 Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.844877 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-92hkj" event={"ID":"adeadc12-d6e2-4168-a1c0-de79d16c8de9","Type":"ContainerDied","Data":"991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.845193 4662 scope.go:117] "RemoveContainer" containerID="991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.858839 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.870952 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.883153 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f
968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.897008 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.914097 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"s
tarted\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"
}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":
\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.921924 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.921961 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.921969 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.922203 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.922241 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:43Z","lastTransitionTime":"2025-12-08T09:15:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.924259 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.934396 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.945809 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.958525 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.968968 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.982863 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:43Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:42Z\\\",\\\"message\\\":\\\"2025-12-08T09:14:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153\\\\n2025-12-08T09:14:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153 to /host/opt/cni/bin/\\\\n2025-12-08T09:14:57Z [verbose] multus-daemon started\\\\n2025-12-08T09:14:57Z [verbose] Readiness Indicator file check\\\\n2025-12-08T09:15:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:43 crc kubenswrapper[4662]: I1208 09:15:43.994843 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c617143-89ca-466c-83a1-ebd101948157\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a199dd10a8d8a2a7788c5c31a11e6a2aa46d73271d32f065ccc144efc67c81af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b37548382b18608f951df58067092f5866a5cceb62e0abcefff44e03cd54ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84fc22fcc1e843d22fcc4e29869b46df1a476fa4795df8e160a445bf5b3bd9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:43Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.010103 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.022172 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.024635 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.024783 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.024888 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.024983 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.025072 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:44Z","lastTransitionTime":"2025-12-08T09:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.033275 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.052547 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",
\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.065563 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.083838 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:23Z\\\",\\\"message\\\":\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813447 6197 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813444 6197 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1208 09:15:22.813025 6197 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.130890 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.130927 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.130936 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.130948 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.130958 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:44Z","lastTransitionTime":"2025-12-08T09:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.232705 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.232767 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.232782 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.232798 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.232810 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:44Z","lastTransitionTime":"2025-12-08T09:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.335000 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.335028 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.335036 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.335049 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.335057 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:44Z","lastTransitionTime":"2025-12-08T09:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.438122 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.438163 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.438175 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.438191 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.438203 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:44Z","lastTransitionTime":"2025-12-08T09:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.540678 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.540795 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.540817 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.540842 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.540859 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:44Z","lastTransitionTime":"2025-12-08T09:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.643418 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.643453 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.643465 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.643479 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.643493 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:44Z","lastTransitionTime":"2025-12-08T09:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.697395 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.697415 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:44 crc kubenswrapper[4662]: E1208 09:15:44.698155 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:44 crc kubenswrapper[4662]: E1208 09:15:44.698327 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.746116 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.746176 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.746200 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.746230 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.746252 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:44Z","lastTransitionTime":"2025-12-08T09:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.848008 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.848043 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.848053 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.848070 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.848080 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:44Z","lastTransitionTime":"2025-12-08T09:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.849215 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-92hkj_adeadc12-d6e2-4168-a1c0-de79d16c8de9/kube-multus/0.log" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.849265 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-92hkj" event={"ID":"adeadc12-d6e2-4168-a1c0-de79d16c8de9","Type":"ContainerStarted","Data":"1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.863584 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.876986 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.887596 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.902110 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.912378 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.923534 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.936019 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c617143-89ca-466c-83a1-ebd101948157\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a199dd10a8d8a2a7788c5c31a11e6a2aa46d73271d32f065ccc144efc67c81af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b37548382b18608f951df58067092f5866a5cceb62e0abcefff44e03cd54ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84fc22fcc1e843d22fcc4e29869b46df1a476fa4795df8e160a445bf5b3bd9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.950330 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.950606 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.950772 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.951060 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.951247 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:44Z","lastTransitionTime":"2025-12-08T09:15:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.953212 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.967021 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:44 crc kubenswrapper[4662]: I1208 09:15:44.979827 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:42Z\\\",\\\"message\\\":\\\"2025-12-08T09:14:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153\\\\n2025-12-08T09:14:57+00:00 [cnibincopy] 
Successfully moved files in /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153 to /host/opt/cni/bin/\\\\n2025-12-08T09:14:57Z [verbose] multus-daemon started\\\\n2025-12-08T09:14:57Z [verbose] Readiness Indicator file check\\\\n2025-12-08T09:15:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.000618 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8
d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:44Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.015291 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a
094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:45Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.035038 4662 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:23Z\\\",\\\"message\\\":\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813447 6197 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813444 6197 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1208 09:15:22.813025 6197 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:45Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.046996 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:45Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.054218 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.054259 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.054270 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.054285 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.054297 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:45Z","lastTransitionTime":"2025-12-08T09:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.059177 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:45Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.069641 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:45Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.079870 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:45Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.089541 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:45Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.156344 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.156378 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.156385 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.156398 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.156408 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:45Z","lastTransitionTime":"2025-12-08T09:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.258159 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.258645 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.258720 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.258812 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.258881 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:45Z","lastTransitionTime":"2025-12-08T09:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.361312 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.361640 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.361777 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.361884 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.361973 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:45Z","lastTransitionTime":"2025-12-08T09:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.465124 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.465171 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.465184 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.465201 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.465214 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:45Z","lastTransitionTime":"2025-12-08T09:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.567387 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.567428 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.567440 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.567458 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.567470 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:45Z","lastTransitionTime":"2025-12-08T09:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.669937 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.669973 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.669985 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.670000 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.670012 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:45Z","lastTransitionTime":"2025-12-08T09:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.697054 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:45 crc kubenswrapper[4662]: E1208 09:15:45.697180 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.697236 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:45 crc kubenswrapper[4662]: E1208 09:15:45.697272 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.773155 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.773241 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.773254 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.773301 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.773315 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:45Z","lastTransitionTime":"2025-12-08T09:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.875191 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.875220 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.875229 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.875241 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.875250 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:45Z","lastTransitionTime":"2025-12-08T09:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.977321 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.977379 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.977388 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.977401 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:45 crc kubenswrapper[4662]: I1208 09:15:45.977411 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:45Z","lastTransitionTime":"2025-12-08T09:15:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.079407 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.079438 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.079448 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.079460 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.079469 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:46Z","lastTransitionTime":"2025-12-08T09:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.181985 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.182017 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.182028 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.182043 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.182053 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:46Z","lastTransitionTime":"2025-12-08T09:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.284972 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.285006 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.285040 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.285060 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.285072 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:46Z","lastTransitionTime":"2025-12-08T09:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.387312 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.387367 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.387381 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.387400 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.387413 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:46Z","lastTransitionTime":"2025-12-08T09:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.490239 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.490279 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.490288 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.490304 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.490314 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:46Z","lastTransitionTime":"2025-12-08T09:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.592105 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.592141 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.592154 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.592170 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.592181 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:46Z","lastTransitionTime":"2025-12-08T09:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.694762 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.694810 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.694820 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.694838 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.694852 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:46Z","lastTransitionTime":"2025-12-08T09:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.697016 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.697049 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:46 crc kubenswrapper[4662]: E1208 09:15:46.697235 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:46 crc kubenswrapper[4662]: E1208 09:15:46.697141 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.710441 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.722897 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.733948 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.745411 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.760391 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.773125 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.786247 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.796873 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.796910 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.796919 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.796931 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.796941 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:46Z","lastTransitionTime":"2025-12-08T09:15:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.801091 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.811462 4662
status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.821064 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.836538 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c617143-89ca-466c-83a1-ebd101948157\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a199dd10a8d8a2a7788c5c31a11e6a2aa46d73271d32f065ccc144efc67c81af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b37548382b18608f951df58067092f5866a5cceb62e0abcefff44e03cd54ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84fc22fcc1e843d22fcc4e29869b46df1a476fa4795df8e160a445bf5b3bd9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.851337 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.869765 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.883698 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:42Z\\\",\\\"message\\\":\\\"2025-12-08T09:14:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153\\\\n2025-12-08T09:14:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153 to /host/opt/cni/bin/\\\\n2025-12-08T09:14:57Z [verbose] multus-daemon started\\\\n2025-12-08T09:14:57Z [verbose] Readiness Indicator file check\\\\n2025-12-08T09:15:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.898954 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.898986 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.898995 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.899010 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.899019 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:46Z","lastTransitionTime":"2025-12-08T09:15:46Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.904514 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fal
se,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.918892 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.940895 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:23Z\\\",\\\"message\\\":\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813447 6197 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813444 6197 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1208 09:15:22.813025 6197 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:46 crc kubenswrapper[4662]: I1208 09:15:46.952492 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:46Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.002450 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.002499 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.002509 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.002527 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.002542 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:47Z","lastTransitionTime":"2025-12-08T09:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.105805 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.105855 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.105867 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.105917 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.105927 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:47Z","lastTransitionTime":"2025-12-08T09:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.208342 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.208372 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.208380 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.208396 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.208421 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:47Z","lastTransitionTime":"2025-12-08T09:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.310430 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.310471 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.310483 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.310499 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.310510 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:47Z","lastTransitionTime":"2025-12-08T09:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.412294 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.412328 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.412338 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.412355 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.412364 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:47Z","lastTransitionTime":"2025-12-08T09:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.514503 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.514550 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.514563 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.514582 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.514595 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:47Z","lastTransitionTime":"2025-12-08T09:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.616355 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.616405 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.616421 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.616480 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.616498 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:47Z","lastTransitionTime":"2025-12-08T09:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.697079 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.697102 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:47 crc kubenswrapper[4662]: E1208 09:15:47.697190 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:47 crc kubenswrapper[4662]: E1208 09:15:47.697351 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.718137 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.718164 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.718173 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.718186 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.718195 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:47Z","lastTransitionTime":"2025-12-08T09:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.820040 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.820078 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.820090 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.820105 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.820117 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:47Z","lastTransitionTime":"2025-12-08T09:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.922754 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.922777 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.922784 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.922798 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:47 crc kubenswrapper[4662]: I1208 09:15:47.922807 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:47Z","lastTransitionTime":"2025-12-08T09:15:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.024822 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.024868 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.024877 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.024891 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.024901 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:48Z","lastTransitionTime":"2025-12-08T09:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.127197 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.127250 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.127264 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.127283 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.127297 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:48Z","lastTransitionTime":"2025-12-08T09:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.229084 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.229123 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.229133 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.229149 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.229160 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:48Z","lastTransitionTime":"2025-12-08T09:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.331385 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.331421 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.331431 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.331445 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.331456 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:48Z","lastTransitionTime":"2025-12-08T09:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.433968 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.434016 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.434049 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.434073 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.434091 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:48Z","lastTransitionTime":"2025-12-08T09:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.536558 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.536682 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.536697 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.536715 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.536730 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:48Z","lastTransitionTime":"2025-12-08T09:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.639772 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.639829 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.639842 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.639863 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.639875 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:48Z","lastTransitionTime":"2025-12-08T09:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.697187 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.697281 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:48 crc kubenswrapper[4662]: E1208 09:15:48.697397 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:48 crc kubenswrapper[4662]: E1208 09:15:48.697569 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.741978 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.742029 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.742040 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.742059 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.742072 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:48Z","lastTransitionTime":"2025-12-08T09:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.843976 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.844037 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.844054 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.844080 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.844097 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:48Z","lastTransitionTime":"2025-12-08T09:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.945847 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.945898 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.945923 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.945935 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:48 crc kubenswrapper[4662]: I1208 09:15:48.945945 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:48Z","lastTransitionTime":"2025-12-08T09:15:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.048214 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.048249 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.048297 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.048317 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.048328 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.151439 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.151480 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.151489 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.151508 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.151521 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.254793 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.254863 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.254883 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.254910 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.254933 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.358133 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.358189 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.358200 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.358217 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.358229 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.461952 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.462010 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.462031 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.462053 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.462066 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.564222 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.564257 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.564268 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.564285 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.564297 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.667641 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.667873 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.667940 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.668005 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.668075 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.697039 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:49 crc kubenswrapper[4662]: E1208 09:15:49.697231 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.697058 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:49 crc kubenswrapper[4662]: E1208 09:15:49.697868 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.698416 4662 scope.go:117] "RemoveContainer" containerID="b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.770948 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.770982 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.770992 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.771007 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.771018 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.848472 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.848510 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.848519 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.848532 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.848542 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: E1208 09:15:49.861393 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:49Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.865082 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.865114 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.865124 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.865138 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.865149 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: E1208 09:15:49.875246 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:49Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.879170 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.879186 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.879194 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.879206 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.879214 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: E1208 09:15:49.892274 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… image list identical to the previous failed status patch …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:49Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.897120 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.897169 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.897182 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.897203 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.897218 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: E1208 09:15:49.910821 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… image list identical to the previous failed status patch …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:49Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.914909 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.914947 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.914964 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.914989 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.915005 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:49 crc kubenswrapper[4662]: E1208 09:15:49.926538 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[… image list identical to the previous failed status patch …],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:49Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:49 crc kubenswrapper[4662]: E1208 09:15:49.926782 4662 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.928756 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.928810 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.928828 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.928899 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:49 crc kubenswrapper[4662]: I1208 09:15:49.928934 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:49Z","lastTransitionTime":"2025-12-08T09:15:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.032505 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.032541 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.032552 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.032571 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.032585 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:50Z","lastTransitionTime":"2025-12-08T09:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.135588 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.135616 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.135624 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.135637 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.135648 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:50Z","lastTransitionTime":"2025-12-08T09:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.237890 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.237940 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.237952 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.237969 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.237980 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:50Z","lastTransitionTime":"2025-12-08T09:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.340031 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.340066 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.340074 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.340087 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.340097 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:50Z","lastTransitionTime":"2025-12-08T09:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.441874 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.441908 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.441918 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.441931 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.441941 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:50Z","lastTransitionTime":"2025-12-08T09:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.545138 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.545207 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.545223 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.545249 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.545263 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:50Z","lastTransitionTime":"2025-12-08T09:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.647618 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.647652 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.647661 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.647674 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.647683 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:50Z","lastTransitionTime":"2025-12-08T09:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.697279 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:50 crc kubenswrapper[4662]: E1208 09:15:50.697446 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.697290 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:50 crc kubenswrapper[4662]: E1208 09:15:50.697554 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.750459 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.750512 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.750523 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.750545 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.750557 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:50Z","lastTransitionTime":"2025-12-08T09:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.852769 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.853426 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.853521 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.853601 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.853674 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:50Z","lastTransitionTime":"2025-12-08T09:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.878776 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/2.log" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.882265 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.882838 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.917697 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f80d74a3a1630f5692043c83886b0e8d1cf7d5
087a7b5083ec3a06ea14cddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:23Z\\\",\\\"message\\\":\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813447 6197 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813444 6197 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1208 09:15:22.813025 6197 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:50Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.935995 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:50Z is after 2025-08-24T17:21:41Z" Dec 08 
09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.956762 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.956799 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.956810 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.956827 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.956840 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:50Z","lastTransitionTime":"2025-12-08T09:15:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:50 crc kubenswrapper[4662]: I1208 09:15:50.975328 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9009
2272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:50Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.002960 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.025199 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.038530 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.055852 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.059861 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.059903 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.059915 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.059933 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.059946 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:51Z","lastTransitionTime":"2025-12-08T09:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.072587 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.084378 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.102040 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.115614 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.129302 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.142270 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.154600 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.161976 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.162032 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.162047 4662 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.162070 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.162084 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:51Z","lastTransitionTime":"2025-12-08T09:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.169855 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.184859 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:42Z\\\",\\\"message\\\":\\\"2025-12-08T09:14:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153\\\\n2025-12-08T09:14:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153 to /host/opt/cni/bin/\\\\n2025-12-08T09:14:57Z [verbose] multus-daemon started\\\\n2025-12-08T09:14:57Z [verbose] Readiness Indicator file check\\\\n2025-12-08T09:15:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.200580 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c617143-89ca-466c-83a1-ebd101948157\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a199dd10a8d8a2a7788c5c31a11e6a2aa46d73271d32f065ccc144efc67c81af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b37548382b18608f951df58067092f5866a5cceb62e0abcefff44e03cd54ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84fc22fcc1e843d22fcc4e29869b46df1a476fa4795df8e160a445bf5b3bd9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.217974 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.265758 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.265804 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.265817 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.265836 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.265852 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:51Z","lastTransitionTime":"2025-12-08T09:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.369291 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.369361 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.369377 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.369403 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.369421 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:51Z","lastTransitionTime":"2025-12-08T09:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.472630 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.472688 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.472703 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.472722 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.472735 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:51Z","lastTransitionTime":"2025-12-08T09:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.575920 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.575988 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.576009 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.576038 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.576061 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:51Z","lastTransitionTime":"2025-12-08T09:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.680111 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.680156 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.680211 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.680232 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.680246 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:51Z","lastTransitionTime":"2025-12-08T09:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.696624 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:15:51 crc kubenswrapper[4662]: E1208 09:15:51.696775 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.696636 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:15:51 crc kubenswrapper[4662]: E1208 09:15:51.697118 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.783924 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.783974 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.783986 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.784004 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.784019 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:51Z","lastTransitionTime":"2025-12-08T09:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.887951 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.887998 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.888009 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.888058 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.888071 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:51Z","lastTransitionTime":"2025-12-08T09:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.889554 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/3.log"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.890385 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/2.log"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.894354 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd" exitCode=1
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.894402 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd"}
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.894440 4662 scope.go:117] "RemoveContainer" containerID="b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.895541 4662 scope.go:117] "RemoveContainer" containerID="c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd"
Dec 08 09:15:51 crc kubenswrapper[4662]: E1208 09:15:51.895771 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466"
Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.911498 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.933550 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.947417 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.959809 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.974107 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.990243 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:51Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.991681 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.991712 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.991724 4662 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.991760 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:51 crc kubenswrapper[4662]: I1208 09:15:51.991774 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:51Z","lastTransitionTime":"2025-12-08T09:15:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.007162 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.019383 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:42Z\\\",\\\"message\\\":\\\"2025-12-08T09:14:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153\\\\n2025-12-08T09:14:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153 to /host/opt/cni/bin/\\\\n2025-12-08T09:14:57Z [verbose] multus-daemon started\\\\n2025-12-08T09:14:57Z [verbose] Readiness Indicator file check\\\\n2025-12-08T09:15:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.030335 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c617143-89ca-466c-83a1-ebd101948157\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a199dd10a8d8a2a7788c5c31a11e6a2aa46d73271d32f065ccc144efc67c81af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b37548382b18608f951df58067092f5866a5cceb62e0abcefff44e03cd54ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84fc22fcc1e843d22fcc4e29869b46df1a476fa4795df8e160a445bf5b3bd9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.040925 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.057326 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f80d74a3a1630f5692043c83886b0e8d1cf7d5
087a7b5083ec3a06ea14cddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b412baaa5968c6f18ceb5ab03f884b01c9130131e589ec03fd42d57f650a8e1f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:23Z\\\",\\\"message\\\":\\\"54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813447 6197 model_client.go:398] Mutate operations generated as: [{Op:mutate Table:Port_Group Row:map[] Rows:[] Columns:[] Mutations:[{Column:ports Mutator:insert Value:{GoSet:[{GoUUID:61897e97-c771-4738-8709-09636387cb00}]}}] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {c02bd945-d57b-49ff-9cd3-202ed3574b26}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1208 09:15:22.813444 6197 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-machine-api/cluster-autoscaler-operator]} name:Service_openshift-machine-api/cluster-autoscaler-operator_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.245:443: 10.217.5.245:9192:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {54fbe873-7e6d-475f-a0ad-8dd5f06d850d}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF1208 09:15:22.813025 6197 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:22Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:50Z\\\",\\\"message\\\":\\\"flector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1208 09:15:50.641929 6538 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1208 09:15:50.642070 6538 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1208 09:15:50.642185 6538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1208 09:15:50.642241 6538 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1208 09:15:50.642258 6538 factory.go:656] Stopping watch factory\\\\nI1208 09:15:50.642271 6538 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1208 09:15:50.642303 6538 handler.go:208] Removed *v1.Node event handler 2\\\\nI1208 09:15:50.650984 6538 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1208 09:15:50.651010 
6538 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1208 09:15:50.651060 6538 ovnkube.go:599] Stopped ovnkube\\\\nI1208 09:15:50.651093 6538 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1208 09:15:50.651263 6538 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.067874 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 
09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.085936 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.093942 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.093999 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.094008 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.094023 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.094033 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:52Z","lastTransitionTime":"2025-12-08T09:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.104727 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.118101 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.130248 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.142567 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.153625 4662 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f
968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.195809 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.195856 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.195867 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.195882 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.195891 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:52Z","lastTransitionTime":"2025-12-08T09:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.297926 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.297994 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.298003 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.298015 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.298024 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:52Z","lastTransitionTime":"2025-12-08T09:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.400493 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.400532 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.400543 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.400558 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.400571 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:52Z","lastTransitionTime":"2025-12-08T09:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.502616 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.502659 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.502670 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.502684 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.502695 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:52Z","lastTransitionTime":"2025-12-08T09:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.605222 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.605277 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.605286 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.605301 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.605331 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:52Z","lastTransitionTime":"2025-12-08T09:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.696810 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:52 crc kubenswrapper[4662]: E1208 09:15:52.696943 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.697141 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:52 crc kubenswrapper[4662]: E1208 09:15:52.697315 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.707291 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.707324 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.707332 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.707349 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.707359 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:52Z","lastTransitionTime":"2025-12-08T09:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.810168 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.810451 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.810543 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.810626 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.810711 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:52Z","lastTransitionTime":"2025-12-08T09:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.920155 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.920916 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.921026 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.921125 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.921200 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:52Z","lastTransitionTime":"2025-12-08T09:15:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.922344 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/3.log" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.925993 4662 scope.go:117] "RemoveContainer" containerID="c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd" Dec 08 09:15:52 crc kubenswrapper[4662]: E1208 09:15:52.926280 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.942172 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c617143-89ca-466c-83a1-ebd101948157\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a199dd10a8d8a2a7788c5c31a11e6a2aa46d73271d32f065ccc144efc67c81af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b37548382b18608f951df58067092f5866a5cceb62e0abcefff44e03cd54ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84fc22fcc1e843d22fcc4e29869b46df1a476fa4795df8e160a445bf5b3bd9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.955564 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.970431 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:52 crc kubenswrapper[4662]: I1208 09:15:52.987542 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:42Z\\\",\\\"message\\\":\\\"2025-12-08T09:14:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153\\\\n2025-12-08T09:14:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153 to /host/opt/cni/bin/\\\\n2025-12-08T09:14:57Z [verbose] multus-daemon started\\\\n2025-12-08T09:14:57Z [verbose] Readiness Indicator file check\\\\n2025-12-08T09:15:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:52Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.010313 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8
d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.023886 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.023918 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.023929 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.023946 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.023957 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:53Z","lastTransitionTime":"2025-12-08T09:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.027699 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.054010 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:50Z\\\",\\\"message\\\":\\\"flector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1208 09:15:50.641929 6538 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1208 09:15:50.642070 6538 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1208 09:15:50.642185 6538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1208 09:15:50.642241 6538 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1208 09:15:50.642258 6538 factory.go:656] Stopping watch factory\\\\nI1208 09:15:50.642271 6538 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1208 09:15:50.642303 6538 handler.go:208] Removed *v1.Node event handler 2\\\\nI1208 09:15:50.650984 6538 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1208 09:15:50.651010 6538 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1208 09:15:50.651060 6538 ovnkube.go:599] Stopped ovnkube\\\\nI1208 09:15:50.651093 6538 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1208 09:15:50.651263 6538 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.066932 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.080868 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.096907 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.110081 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.120104 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.126406 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.126428 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.126437 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.126472 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.126482 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:53Z","lastTransitionTime":"2025-12-08T09:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.138883 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.150577 4662 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.161791 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.176231 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.184952 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.192809 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:53Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.228223 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.228376 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.228503 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.228578 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.228636 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:53Z","lastTransitionTime":"2025-12-08T09:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.331334 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.331397 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.331417 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.331446 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.331470 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:53Z","lastTransitionTime":"2025-12-08T09:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.434056 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.434089 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.434097 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.434110 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.434118 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:53Z","lastTransitionTime":"2025-12-08T09:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.536959 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.537015 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.537037 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.537066 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.537087 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:53Z","lastTransitionTime":"2025-12-08T09:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.639935 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.640360 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.640575 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.640861 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.641086 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:53Z","lastTransitionTime":"2025-12-08T09:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.697063 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.697111 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:53 crc kubenswrapper[4662]: E1208 09:15:53.697443 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:53 crc kubenswrapper[4662]: E1208 09:15:53.697579 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.743779 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.743847 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.743865 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.743894 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.743912 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:53Z","lastTransitionTime":"2025-12-08T09:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.847404 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.847721 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.847968 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.848221 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.848422 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:53Z","lastTransitionTime":"2025-12-08T09:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.951501 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.951559 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.951580 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.951606 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:53 crc kubenswrapper[4662]: I1208 09:15:53.951630 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:53Z","lastTransitionTime":"2025-12-08T09:15:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.054323 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.054383 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.054406 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.054433 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.054453 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:54Z","lastTransitionTime":"2025-12-08T09:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.157134 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.157522 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.157701 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.157941 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.158068 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:54Z","lastTransitionTime":"2025-12-08T09:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.261170 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.261231 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.261248 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.261275 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.261294 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:54Z","lastTransitionTime":"2025-12-08T09:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.363872 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.364179 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.364241 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.364300 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.364352 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:54Z","lastTransitionTime":"2025-12-08T09:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.467227 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.467266 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.467278 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.467295 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.467306 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:54Z","lastTransitionTime":"2025-12-08T09:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.570041 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.570074 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.570086 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.570104 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.570117 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:54Z","lastTransitionTime":"2025-12-08T09:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.674906 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.674963 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.674975 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.674999 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.675016 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:54Z","lastTransitionTime":"2025-12-08T09:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.697252 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.697256 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:54 crc kubenswrapper[4662]: E1208 09:15:54.698118 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:54 crc kubenswrapper[4662]: E1208 09:15:54.698265 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.778683 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.779008 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.779097 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.779183 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.779257 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:54Z","lastTransitionTime":"2025-12-08T09:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.881895 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.882425 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.882520 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.882619 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.882729 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:54Z","lastTransitionTime":"2025-12-08T09:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.985894 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.986528 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.986627 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.986762 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:54 crc kubenswrapper[4662]: I1208 09:15:54.986859 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:54Z","lastTransitionTime":"2025-12-08T09:15:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.088981 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.089040 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.089059 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.089082 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.089102 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:55Z","lastTransitionTime":"2025-12-08T09:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.192299 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.192351 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.192362 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.192380 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.192391 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:55Z","lastTransitionTime":"2025-12-08T09:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.294638 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.294936 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.295032 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.295128 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.295199 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:55Z","lastTransitionTime":"2025-12-08T09:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.397981 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.398012 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.398020 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.398032 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.398042 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:55Z","lastTransitionTime":"2025-12-08T09:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.500481 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.500520 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.500533 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.500548 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.500561 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:55Z","lastTransitionTime":"2025-12-08T09:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.602911 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.603140 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.603149 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.603163 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.603172 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:55Z","lastTransitionTime":"2025-12-08T09:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.697311 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.697446 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:55 crc kubenswrapper[4662]: E1208 09:15:55.697525 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:55 crc kubenswrapper[4662]: E1208 09:15:55.697631 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.707546 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.707600 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.707613 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.707633 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.707646 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:55Z","lastTransitionTime":"2025-12-08T09:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.810734 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.810842 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.810864 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.810894 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.810915 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:55Z","lastTransitionTime":"2025-12-08T09:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.914424 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.914486 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.914505 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.914529 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:55 crc kubenswrapper[4662]: I1208 09:15:55.914547 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:55Z","lastTransitionTime":"2025-12-08T09:15:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.016307 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.016347 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.016355 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.016369 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.016377 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:56Z","lastTransitionTime":"2025-12-08T09:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.119000 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.119039 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.119050 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.119066 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.119078 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:56Z","lastTransitionTime":"2025-12-08T09:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.221959 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.222020 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.222039 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.222061 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.222076 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:56Z","lastTransitionTime":"2025-12-08T09:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.325482 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.325533 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.325545 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.325562 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.325574 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:56Z","lastTransitionTime":"2025-12-08T09:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.428193 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.428244 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.428257 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.428274 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.428285 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:56Z","lastTransitionTime":"2025-12-08T09:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.530620 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.530678 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.530704 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.530801 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.530842 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:56Z","lastTransitionTime":"2025-12-08T09:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.633090 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.633347 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.633424 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.633497 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.633560 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:56Z","lastTransitionTime":"2025-12-08T09:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.697037 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.697037 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:56 crc kubenswrapper[4662]: E1208 09:15:56.697176 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:56 crc kubenswrapper[4662]: E1208 09:15:56.697292 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.710150 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9c617143-89ca-466c-83a1-ebd101948157\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a199dd10a8d8a2a7788c5c31a11e6a2aa46d73271d32f065ccc144efc67c81af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a0b37548382b18608f951df58067092f5866a5cceb62e0abcefff44e03cd54ee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://84fc22fcc1e843d22fcc4e29869b46df1a476fa4795df8e160a445bf5b3bd9d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c4794e5a3b799a82e5fcbed9c8f0ae09a61c49040325c933784b9a5f9e821273\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.723187 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.736800 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z"
Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.737106 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.737148 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.737189 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.737209 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.737221 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:56Z","lastTransitionTime":"2025-12-08T09:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.750898 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-92hkj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"adeadc12-d6e2-4168-a1c0-de79d16c8de9\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:42Z\\\",\\\"message\\\":\\\"2025-12-08T09:14:57+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153\\\\n2025-12-08T09:14:57+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_6513dcbc-dc03-4bf9-ba0a-cbbf3ef36153 to /host/opt/cni/bin/\\\\n2025-12-08T09:14:57Z [verbose] multus-daemon started\\\\n2025-12-08T09:14:57Z [verbose] Readiness Indicator file check\\\\n2025-12-08T09:15:42Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zzhlb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-92hkj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.770571 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"43334393-7653-4743-a4ee-86369676d9fc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://878c67dd9db84908758b0d0cd9f37f87613e57c4f1b9ac68b7edc1542a6dec58\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://614774f7267ae2f3b937e2cbf25eada6f98ee0a6d05da998018e1bbb5acd5a20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0b4455be5ff9e99cf1a2e2fedac4ad531425cb7ae991e73691447891acfebbba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f42e8a5d84e2f6ebc5cb9df542e8a5be9e4e4c8
d855eed17d35dff45d47079f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd703e6129ad73ad7f5a482600b251c078fb08ae36ef582fb4f7dfaabe29d4cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8dfedffc74db358189f8f4c6a288c824a0d4cba4dd16f687a0fcc4f0d0530a7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5110213fa147649bc4fba45d4dad4302c34b10d261d854fc12de64a4055ee0dc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5093b4cb77a38607fe92dd63437791a7a49bf9bddd94d4e1c6453a7307007934\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.787713 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea3eddb9-01cf-49f0-81cc-c9534bfe50b6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e5be146fc1d2729add952205b9e415a
094647e037a99e0886e1ea6070339a15\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"ing back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI1208 09:14:49.126201 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1208 09:14:49.135102 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2727831365/tls.crt::/tmp/serving-cert-2727831365/tls.key\\\\\\\"\\\\nI1208 09:14:54.602832 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 09:14:54.616207 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 09:14:54.616235 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 09:14:54.616261 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 09:14:54.616267 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 09:14:54.625577 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW1208 09:14:54.625607 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI1208 09:14:54.625609 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 09:14:54.625614 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 09:14:54.625634 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 09:14:54.625638 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 09:14:54.625643 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 09:14:54.625647 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 09:14:54.630121 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.807597 4662 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d221fdb-50ee-4a2a-9db5-30e79f604466\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-12-08T09:15:50Z\\\",\\\"message\\\":\\\"flector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI1208 09:15:50.641929 6538 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1208 09:15:50.642070 6538 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI1208 09:15:50.642185 6538 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1208 09:15:50.642241 6538 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1208 09:15:50.642258 6538 factory.go:656] Stopping watch factory\\\\nI1208 09:15:50.642271 6538 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1208 09:15:50.642303 6538 handler.go:208] Removed *v1.Node event handler 2\\\\nI1208 09:15:50.650984 6538 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI1208 09:15:50.651010 6538 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI1208 09:15:50.651060 6538 ovnkube.go:599] Stopped ovnkube\\\\nI1208 09:15:50.651093 6538 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF1208 09:15:50.651263 6538 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:49Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zcn6q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-fhz87\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.821018 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"daba0096-67d9-468a-a1fa-97fc0fa45ff1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://49e617f09575ab75bbfc424981b78a1ed7afa1e45bf2f1fff62c660fb137bd59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9767ba76a2888164b532b6336d4eff7fe63a204fbffe352d315b3e00c0579600\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-6gzfd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:08Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-hlk7c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.832320 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e629796-86fa-4436-8a01-326fc70c7dc1\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f4db0dcc290225283d286a5345e304b83564f43a236287debd22efe994c294e0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hgvzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-5dzps\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.839539 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.839566 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.839574 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.839586 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.839595 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:56Z","lastTransitionTime":"2025-12-08T09:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.843104 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"39cddafe-53c5-4757-b366-b083a54acd74\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://25b24d328adb0a51b17f9dd1f3034eb12a13b51a854397895c352266067c072d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\
\\"startedAt\\\":\\\"2025-12-08T09:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://229f3bcc344b0dd10629605d6180aa7c8e9ff7f57ea3a7f35da380005dcb2faf\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d350af6b680a78ece794d3ead7f968499bfc7854f6a1dceafa9bce94014ab4ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.856254 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.866276 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56afb5c1546aa244f3ceb72a590bb02953a9861f4fcae557cc682bf4bda86ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.875589 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-wzjpk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"f65a592a-5cfd-40ce-9ec9-aa26409e7b74\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://db6aee720e9b7a029818e4f3c20f61276ea95e00fe88b950b4362386bd7db614\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wr5s\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:58Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-wzjpk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.889801 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"42f18be0-5f4b-4e53-ac80-451fbfc548bf\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:09Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bf8kl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:15:09Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hd7m7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.903473 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7fb1b9f431f391cd2f89d1e0e8459479d89648c8172328980bbb0ee8fc62a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.917957 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://455060b858b6523eaebb2cf436a68f76161a814354cf7a0ca59b37ec076de6e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://659faed36b36b8022656662a4d53454e6eb094cb050023f5e803d6a4a8cc22fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.927891 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-xkcpj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fe880ac0-787a-43d5-90a4-5e7fa966f71d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://808dc759d986a93aec3865f7ae20397a5272fda5154db35dec30223cdcfe8922\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-w6tjk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-xkcpj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.941219 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.941255 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.941263 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.941275 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.941284 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:56Z","lastTransitionTime":"2025-12-08T09:15:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:56 crc kubenswrapper[4662]: I1208 09:15:56.944354 4662 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab82f706-db87-4a73-9f90-c1fba510d034\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7877c4b76137a40e02c39c734e9e1b22411852c611ce399cca56bf8f0741d161\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T09:15:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://979b304097eb61d43fa0a1dcdd353dc049a2fc386c58de11fce09924f14148c8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://005acf90
82a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://005acf9082a4b23b0769e815a9cd38f911009a87e135c79e64cb18d7a83fe023\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:57Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://320c383024d6639dbc23b97f8d9fed4c080a36a35f5759ed06153e0c21b82869\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://39bbc23a0a56860b74428c6ada29a1ae149f160e0667584dfa724f87c985470e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:14:59Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:14:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3029deab16da0473972f93bf33aaf05a399e2df2a6ace677cddc9bdccacaada8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b75e67c0cba47f447501c5fb693962f31722b12e1f96fe78a55c3d306392646\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T09:15:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T09:15:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zsqqt\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T09:14:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-g7wsp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:56Z is after 2025-08-24T17:21:41Z" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.043584 4662 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.043645 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.043664 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.043688 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.043707 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:57Z","lastTransitionTime":"2025-12-08T09:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.146206 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.146263 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.146282 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.146307 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.146324 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:57Z","lastTransitionTime":"2025-12-08T09:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.249207 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.249274 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.249291 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.249315 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.249332 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:57Z","lastTransitionTime":"2025-12-08T09:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.352108 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.352173 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.352194 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.352217 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.352234 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:57Z","lastTransitionTime":"2025-12-08T09:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.454982 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.455013 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.455024 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.455039 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.455049 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:57Z","lastTransitionTime":"2025-12-08T09:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.557553 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.557627 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.557639 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.557654 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.557665 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:57Z","lastTransitionTime":"2025-12-08T09:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.660615 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.660675 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.660688 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.660708 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.660722 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:57Z","lastTransitionTime":"2025-12-08T09:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.697141 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.697178 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:57 crc kubenswrapper[4662]: E1208 09:15:57.697301 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:57 crc kubenswrapper[4662]: E1208 09:15:57.697398 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.764846 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.764903 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.764915 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.764937 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.764951 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:57Z","lastTransitionTime":"2025-12-08T09:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.867539 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.867592 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.867605 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.867623 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.867636 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:57Z","lastTransitionTime":"2025-12-08T09:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.970224 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.970267 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.970278 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.970295 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:57 crc kubenswrapper[4662]: I1208 09:15:57.970306 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:57Z","lastTransitionTime":"2025-12-08T09:15:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.073165 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.073284 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.073348 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.073375 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.073393 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:58Z","lastTransitionTime":"2025-12-08T09:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.176139 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.176177 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.176188 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.176204 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.176218 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:58Z","lastTransitionTime":"2025-12-08T09:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.278866 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.278925 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.278943 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.278964 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.279035 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:58Z","lastTransitionTime":"2025-12-08T09:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.380560 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.380595 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.380603 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.380616 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.380625 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:58Z","lastTransitionTime":"2025-12-08T09:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.482468 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.482503 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.482514 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.482527 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.482537 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:58Z","lastTransitionTime":"2025-12-08T09:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.584551 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.584605 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.584619 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.584639 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.584654 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:58Z","lastTransitionTime":"2025-12-08T09:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.687028 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.687114 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.687128 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.687145 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.687156 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:58Z","lastTransitionTime":"2025-12-08T09:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.697504 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.697510 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.697699 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.697837 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.782590 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.782710 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.782793 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.782835 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.782869 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.782911 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:02.782882661 +0000 UTC m=+146.351910661 (durationBeforeRetry 1m4s). 
Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.782947 4662 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.782985 4662 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.783017 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:17:02.783002814 +0000 UTC m=+146.352030874 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.783039 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-12-08 09:17:02.783027594 +0000 UTC m=+146.352055584 (durationBeforeRetry 1m4s).
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.783061 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.783086 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.783089 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.783166 4662 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.783184 4662 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.783100 4662 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.783273 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-12-08 09:17:02.78324258 +0000 UTC m=+146.352270570 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:58 crc kubenswrapper[4662]: E1208 09:15:58.783311 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-12-08 09:17:02.783292721 +0000 UTC m=+146.352320901 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.789755 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.789796 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.789809 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.789825 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.789839 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:58Z","lastTransitionTime":"2025-12-08T09:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.892931 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.892982 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.892997 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.893022 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.893036 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:58Z","lastTransitionTime":"2025-12-08T09:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.995663 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.995728 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.995772 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.995798 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:58 crc kubenswrapper[4662]: I1208 09:15:58.995820 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:58Z","lastTransitionTime":"2025-12-08T09:15:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.098762 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.098818 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.098837 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.098863 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.098881 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:59Z","lastTransitionTime":"2025-12-08T09:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.204968 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.205031 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.205090 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.205128 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.205153 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:59Z","lastTransitionTime":"2025-12-08T09:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.308715 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.308838 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.308869 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.308901 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.308926 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:59Z","lastTransitionTime":"2025-12-08T09:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.412423 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.412519 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.412531 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.412553 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.412569 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:59Z","lastTransitionTime":"2025-12-08T09:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.516824 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.516910 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.516928 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.517397 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.517454 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:59Z","lastTransitionTime":"2025-12-08T09:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.619919 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.619968 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.619980 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.619998 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.620019 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:59Z","lastTransitionTime":"2025-12-08T09:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.697113 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:15:59 crc kubenswrapper[4662]: E1208 09:15:59.697530 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.697766 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:15:59 crc kubenswrapper[4662]: E1208 09:15:59.697848 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.722331 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.722391 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.722402 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.722418 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.722428 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:59Z","lastTransitionTime":"2025-12-08T09:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.824578 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.824621 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.824633 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.824652 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.824662 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:59Z","lastTransitionTime":"2025-12-08T09:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.927567 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.927647 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.927670 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.927701 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.927724 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:59Z","lastTransitionTime":"2025-12-08T09:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.985779 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.985820 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.985854 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.985871 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:15:59 crc kubenswrapper[4662]: I1208 09:15:59.985882 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:15:59Z","lastTransitionTime":"2025-12-08T09:15:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: E1208 09:16:00.001292 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:15:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:15:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:15:59Z is after 2025-08-24T17:21:41Z" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.005202 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.005242 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.005251 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.005265 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.005275 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: E1208 09:16:00.016685 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:16:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.019942 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.019978 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.019989 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.020005 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.020014 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: E1208 09:16:00.030442 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:16:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.033116 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.033140 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.033148 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.033159 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.033168 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: E1208 09:16:00.045194 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:16:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.048003 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.048043 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
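[editor's note] The repeated patch failures above share one root cause: before admitting the node-status patch, the API server must call the node.network-node-identity.openshift.io validating webhook, and the TLS certificate served on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, months before the node's clock time of 2025-12-08. The CNI "NotReady" condition carried inside the same entries is a separate symptom, not the cause of the patch failure. A minimal way to confirm the expiry from the node is sketched below in Python; it assumes the endpoint from the log is reachable locally and that the third-party cryptography package is installed. This is an illustrative diagnostic sketch, not OpenShift tooling:

    # check_webhook_cert.py -- diagnostic sketch, not part of OpenShift.
    # Assumes the network-node-identity webhook listens on 127.0.0.1:9743
    # (the address in the kubelet error above).
    import socket
    import ssl
    from datetime import datetime, timezone

    from cryptography import x509  # third-party: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743

    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # inspect the cert, do not trust it

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)  # raw DER bytes

    cert = x509.load_der_x509_certificate(der)
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)
    if datetime.now(timezone.utc) > not_after:
        print("EXPIRED -- matches the kubelet's x509 error")

The same dates can be read with openssl s_client piped into openssl x509 -noout -dates; either way, with an expired serving certificate the remedy is certificate rotation, not anything CNI-related.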
event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.048085 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.048103 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.048114 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: E1208 09:16:00.058473 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T09:16:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"8ede791b-f654-4671-ae51-71d01a124d69\\\",\\\"systemUUID\\\":\\\"07345261-8303-4980-8140-240ca8110023\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-12-08T09:16:00Z is after 2025-08-24T17:21:41Z" Dec 08 09:16:00 crc kubenswrapper[4662]: E1208 09:16:00.058577 4662 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.059612 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
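[editor's note] The 09:16:00.058577 entry marks the point where the kubelet gives up for this sync period: the upstream kubelet attempts the status update a small fixed number of times (the nodeStatusUpdateRetry constant in kubelet_node_status.go, 5 in upstream source) and then logs "update node status exceeds retry count" before trying again on the next sync. The Python sketch below only mirrors that control flow for illustration; the exact retry bound in this particular build is an assumption:

    # retry_sketch.py -- illustrative only; the real kubelet is Go code.
    NODE_STATUS_UPDATE_RETRY = 5  # assumed bound, matching upstream kubelet

    def update_node_status(try_update):
        """Mirror of the kubelet's bounded retry loop for status patches."""
        for attempt in range(NODE_STATUS_UPDATE_RETRY):
            err = try_update(attempt)
            if err is None:
                return None
            print(f"attempt {attempt + 1} failed: {err} -- will retry")
        return "update node status exceeds retry count"

    # Every attempt fails the way the webhook call fails in the log above.
    err = update_node_status(
        lambda i: "tls: failed to verify certificate: x509: certificate has expired"
    )
    print(err)

Because the failure is deterministic (an expired certificate), every retry fails identically, which is why the same error repeats verbatim in the entries above.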
event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.059643 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.059655 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.059668 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.059678 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.161869 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.161907 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.161920 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.161936 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.161946 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.263953 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.264066 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.264079 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.264096 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.264107 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.366932 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.366974 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.366986 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.367070 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.367085 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.470357 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.470400 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.470410 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.470425 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.470439 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.572963 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.573020 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.573029 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.573042 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.573052 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.675055 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.675287 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.675351 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.675416 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.675471 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.700931 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.701126 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:00 crc kubenswrapper[4662]: E1208 09:16:00.702110 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:00 crc kubenswrapper[4662]: E1208 09:16:00.702387 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.717259 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.778060 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.778380 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.778460 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.778544 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.778626 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.880997 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.881038 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.881049 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.881066 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.881077 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.983081 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.983122 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.983132 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.983148 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:00 crc kubenswrapper[4662]: I1208 09:16:00.983161 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:00Z","lastTransitionTime":"2025-12-08T09:16:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.085388 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.085619 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.085703 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.085817 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.085908 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:01Z","lastTransitionTime":"2025-12-08T09:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.188027 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.188308 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.188324 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.188338 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.188349 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:01Z","lastTransitionTime":"2025-12-08T09:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.291080 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.291148 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.291165 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.291188 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.291206 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:01Z","lastTransitionTime":"2025-12-08T09:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.393688 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.393754 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.393768 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.393790 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.393805 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:01Z","lastTransitionTime":"2025-12-08T09:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.496851 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.496898 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.496910 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.496929 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.496940 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:01Z","lastTransitionTime":"2025-12-08T09:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.599704 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.599762 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.599772 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.599787 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.599797 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:01Z","lastTransitionTime":"2025-12-08T09:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.696485 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:01 crc kubenswrapper[4662]: E1208 09:16:01.696855 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.696537 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:01 crc kubenswrapper[4662]: E1208 09:16:01.697104 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.702364 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.702398 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.702408 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.702423 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.702435 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:01Z","lastTransitionTime":"2025-12-08T09:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.805030 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.805060 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.805068 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.805081 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.805092 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:01Z","lastTransitionTime":"2025-12-08T09:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.907713 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.907862 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.907892 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.907913 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:01 crc kubenswrapper[4662]: I1208 09:16:01.907927 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:01Z","lastTransitionTime":"2025-12-08T09:16:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.010621 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.010665 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.010679 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.010696 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.010710 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:02Z","lastTransitionTime":"2025-12-08T09:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.113771 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.113818 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.113830 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.113850 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.113864 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:02Z","lastTransitionTime":"2025-12-08T09:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.217107 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.217145 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.217156 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.217172 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.217372 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:02Z","lastTransitionTime":"2025-12-08T09:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.319268 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.319651 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.319834 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.319973 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.320113 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:02Z","lastTransitionTime":"2025-12-08T09:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.423915 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.423980 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.424004 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.424035 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.424122 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:02Z","lastTransitionTime":"2025-12-08T09:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.527221 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.527254 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.527262 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.527275 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.527284 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:02Z","lastTransitionTime":"2025-12-08T09:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.629561 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.629587 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.629595 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.629608 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.629617 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:02Z","lastTransitionTime":"2025-12-08T09:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.696959 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:02 crc kubenswrapper[4662]: E1208 09:16:02.697103 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.697816 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:02 crc kubenswrapper[4662]: E1208 09:16:02.697891 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.732473 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.732537 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.732554 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.732578 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.732594 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:02Z","lastTransitionTime":"2025-12-08T09:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.834999 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.835090 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.835114 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.835143 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.835166 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:02Z","lastTransitionTime":"2025-12-08T09:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.937568 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.937916 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.938041 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.938179 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:02 crc kubenswrapper[4662]: I1208 09:16:02.938294 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:02Z","lastTransitionTime":"2025-12-08T09:16:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.041637 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.041693 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.041735 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.041801 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.041823 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:03Z","lastTransitionTime":"2025-12-08T09:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.144283 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.144306 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.144314 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.144327 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.144336 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:03Z","lastTransitionTime":"2025-12-08T09:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
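Has your network provider started?"}

Every one of these messages points at the same directory. A minimal sketch, assuming Python 3, of the check an operator could run on the node: the path comes straight from the log, while the accepted extensions (.conf, .conflist, .json) are an assumption about CNI-style runtimes, not something this log states.

```python
from pathlib import Path

CNI_DIR = Path("/etc/kubernetes/cni/net.d")   # path taken from the log above
CNI_EXTS = {".conf", ".conflist", ".json"}    # assumed CNI config extensions

def cni_configs():
    """Return the CNI config files a runtime could load from CNI_DIR."""
    if not CNI_DIR.is_dir():
        return []
    return sorted(p for p in CNI_DIR.iterdir() if p.suffix in CNI_EXTS)

if __name__ == "__main__":
    found = cni_configs()
    for p in found:
        print("CNI config present:", p)
    if not found:
        # Mirrors the state captured above: the network provider has not
        # written a config yet, so the kubelet keeps reporting NotReady.
        print("no CNI configuration file in", CNI_DIR)
```
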
Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.246463 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.246502 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.246513 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.246531 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.246542 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:03Z","lastTransitionTime":"2025-12-08T09:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.348260 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.348292 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.348300 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.348312 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.348320 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:03Z","lastTransitionTime":"2025-12-08T09:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.450957 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.451019 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.451111 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.451145 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.451223 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:03Z","lastTransitionTime":"2025-12-08T09:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.553688 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.553801 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.553837 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.553869 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.553894 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:03Z","lastTransitionTime":"2025-12-08T09:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.657193 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.657276 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.657289 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.657311 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.657324 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:03Z","lastTransitionTime":"2025-12-08T09:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.696524 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.696524 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:03 crc kubenswrapper[4662]: E1208 09:16:03.696646 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:03 crc kubenswrapper[4662]: E1208 09:16:03.696727 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.761081 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.761132 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.761336 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.761353 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.761371 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:03Z","lastTransitionTime":"2025-12-08T09:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.864817 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.864856 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.864864 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.864894 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.864904 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:03Z","lastTransitionTime":"2025-12-08T09:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.966502 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.966535 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.966544 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.966558 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:03 crc kubenswrapper[4662]: I1208 09:16:03.966567 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:03Z","lastTransitionTime":"2025-12-08T09:16:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.068604 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.068662 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.068672 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.068712 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.068722 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:04Z","lastTransitionTime":"2025-12-08T09:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.172191 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.172245 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.172261 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.172284 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.172301 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:04Z","lastTransitionTime":"2025-12-08T09:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.275446 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.275494 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.275507 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.275521 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.275532 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:04Z","lastTransitionTime":"2025-12-08T09:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.377581 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.377621 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.377634 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.377651 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.377661 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:04Z","lastTransitionTime":"2025-12-08T09:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.479953 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.480029 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.480053 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.480078 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.480108 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:04Z","lastTransitionTime":"2025-12-08T09:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.582555 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.582589 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.582599 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.582614 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.582626 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:04Z","lastTransitionTime":"2025-12-08T09:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.684380 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.684431 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.684491 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.684518 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.684539 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:04Z","lastTransitionTime":"2025-12-08T09:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.697087 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:04 crc kubenswrapper[4662]: E1208 09:16:04.697230 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.697090 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:04 crc kubenswrapper[4662]: E1208 09:16:04.697420 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.787116 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.787191 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.787217 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.787246 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.787269 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:04Z","lastTransitionTime":"2025-12-08T09:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.890933 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.890999 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.891025 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.891053 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.891077 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:04Z","lastTransitionTime":"2025-12-08T09:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.994386 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.994434 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.994445 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.994464 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:04 crc kubenswrapper[4662]: I1208 09:16:04.994477 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:04Z","lastTransitionTime":"2025-12-08T09:16:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.097585 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.097629 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.097637 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.097652 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.097663 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:05Z","lastTransitionTime":"2025-12-08T09:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.200416 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.200483 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.200494 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.200512 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.200523 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:05Z","lastTransitionTime":"2025-12-08T09:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
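Has your network provider started?"}

The same five-entry heartbeat repeats roughly every 100 ms throughout this capture. A minimal sketch, assuming Python 3 and a file journal.log holding this text (a hypothetical name), that tallies the repetitions instead of reading them one by one; the regexes match only the exact klog format shown here.

```python
import re
from collections import Counter

# klog entries as they appear in this capture, e.g.
#   I1208 09:16:05.200523 4662 setters.go:603] "Node became not ready" ...
NOT_READY = re.compile(
    r'I\d{4} (\d{2}:\d{2}:\d{2})\.\d+\s+\d+ setters\.go:\d+\] "Node became not ready"'
)
POD_ERROR = re.compile(
    r'pod_workers\.go:\d+\] "Error syncing pod, skipping".*?pod="([^"]+)"',
    re.S,
)

def summarize(text):
    heartbeats = Counter(m.group(1) for m in NOT_READY.finditer(text))
    failures = Counter(m.group(1) for m in POD_ERROR.finditer(text))
    for second in sorted(heartbeats):
        print(f"{second}: node recorded NotReady {heartbeats[second]}x")
    for pod, n in failures.most_common():
        print(f"{pod}: {n} sync attempts skipped (network not ready)")

if __name__ == "__main__":
    with open("journal.log", encoding="utf-8") as fh:  # hypothetical capture file
        summarize(fh.read())
```
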
Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.303334 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.303367 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.303389 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.303405 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.303415 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:05Z","lastTransitionTime":"2025-12-08T09:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.406092 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.406134 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.406144 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.406158 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.406168 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:05Z","lastTransitionTime":"2025-12-08T09:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.508152 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.508188 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.508196 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.508210 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.508219 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:05Z","lastTransitionTime":"2025-12-08T09:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.610906 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.610960 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.610970 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.610986 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.610995 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:05Z","lastTransitionTime":"2025-12-08T09:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.697089 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.697095 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:05 crc kubenswrapper[4662]: E1208 09:16:05.697203 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:05 crc kubenswrapper[4662]: E1208 09:16:05.697284 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.713608 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.713653 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.713667 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.713688 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.713704 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:05Z","lastTransitionTime":"2025-12-08T09:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.816726 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.816777 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.816787 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.816804 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.816817 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:05Z","lastTransitionTime":"2025-12-08T09:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.919259 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.919305 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.919316 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.919336 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:05 crc kubenswrapper[4662]: I1208 09:16:05.919347 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:05Z","lastTransitionTime":"2025-12-08T09:16:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.021871 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.021910 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.021919 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.021934 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.021944 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:06Z","lastTransitionTime":"2025-12-08T09:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.124335 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.124388 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.124405 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.124427 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.124474 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:06Z","lastTransitionTime":"2025-12-08T09:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.226843 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.226935 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.226954 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.226982 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.227000 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:06Z","lastTransitionTime":"2025-12-08T09:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.329495 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.329543 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.329558 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.329576 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.329590 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:06Z","lastTransitionTime":"2025-12-08T09:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.432592 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.432647 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.432666 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.432699 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.432715 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:06Z","lastTransitionTime":"2025-12-08T09:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.535398 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.535465 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.535479 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.535495 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.535509 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:06Z","lastTransitionTime":"2025-12-08T09:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.638999 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.639063 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.639079 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.639102 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.639119 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:06Z","lastTransitionTime":"2025-12-08T09:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.696657 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:06 crc kubenswrapper[4662]: E1208 09:16:06.696899 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.697112 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:06 crc kubenswrapper[4662]: E1208 09:16:06.697820 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.698202 4662 scope.go:117] "RemoveContainer" containerID="c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd" Dec 08 09:16:06 crc kubenswrapper[4662]: E1208 09:16:06.698506 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.741357 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.741392 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.741402 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.741420 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.741430 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:06Z","lastTransitionTime":"2025-12-08T09:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.742568 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-g7wsp" podStartSLOduration=71.742546367 podStartE2EDuration="1m11.742546367s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:06.730254582 +0000 UTC m=+90.299282572" watchObservedRunningTime="2025-12-08 09:16:06.742546367 +0000 UTC m=+90.311574357" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.756592 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-wzjpk" podStartSLOduration=72.756567377 podStartE2EDuration="1m12.756567377s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:06.743355078 +0000 UTC m=+90.312383068" watchObservedRunningTime="2025-12-08 09:16:06.756567377 +0000 UTC m=+90.325595367" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.843620 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.843671 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.843682 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.843698 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.843708 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:06Z","lastTransitionTime":"2025-12-08T09:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.845380 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-xkcpj" podStartSLOduration=72.845365371 podStartE2EDuration="1m12.845365371s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:06.811540038 +0000 UTC m=+90.380568038" watchObservedRunningTime="2025-12-08 09:16:06.845365371 +0000 UTC m=+90.414393361" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.873855 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-92hkj" podStartSLOduration=72.873833162 podStartE2EDuration="1m12.873833162s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:06.845849373 +0000 UTC m=+90.414877373" watchObservedRunningTime="2025-12-08 09:16:06.873833162 +0000 UTC m=+90.442861152" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.892937 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=39.892915686 podStartE2EDuration="39.892915686s" podCreationTimestamp="2025-12-08 09:15:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:06.874621353 +0000 UTC m=+90.443649343" watchObservedRunningTime="2025-12-08 09:16:06.892915686 +0000 UTC m=+90.461943676" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.946417 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.946452 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.946460 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.946473 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.946483 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:06Z","lastTransitionTime":"2025-12-08T09:16:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.950063 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-hlk7c" podStartSLOduration=71.950048454 podStartE2EDuration="1m11.950048454s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:06.918556063 +0000 UTC m=+90.487584063" watchObservedRunningTime="2025-12-08 09:16:06.950048454 +0000 UTC m=+90.519076444" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.965942 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=68.965921873 podStartE2EDuration="1m8.965921873s" podCreationTimestamp="2025-12-08 09:14:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:06.949772527 +0000 UTC m=+90.518800517" watchObservedRunningTime="2025-12-08 09:16:06.965921873 +0000 UTC m=+90.534949863" Dec 08 09:16:06 crc kubenswrapper[4662]: I1208 09:16:06.998537 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=72.998515873 podStartE2EDuration="1m12.998515873s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:06.96583009 +0000 UTC m=+90.534858080" watchObservedRunningTime="2025-12-08 09:16:06.998515873 +0000 UTC m=+90.567543873" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.027575 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podStartSLOduration=73.02755329 podStartE2EDuration="1m13.02755329s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:07.026774909 +0000 UTC m=+90.595802899" watchObservedRunningTime="2025-12-08 09:16:07.02755329 +0000 UTC m=+90.596581280" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.041309 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=7.041287032 podStartE2EDuration="7.041287032s" podCreationTimestamp="2025-12-08 09:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:07.036547457 +0000 UTC m=+90.605575447" watchObservedRunningTime="2025-12-08 09:16:07.041287032 +0000 UTC m=+90.610315022" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.048113 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.048155 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.048168 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.048185 4662 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.048197 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:07Z","lastTransitionTime":"2025-12-08T09:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.055857 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=73.055838806 podStartE2EDuration="1m13.055838806s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:07.055674112 +0000 UTC m=+90.624702112" watchObservedRunningTime="2025-12-08 09:16:07.055838806 +0000 UTC m=+90.624866796" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.150727 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.150798 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.150809 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.150826 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.150856 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:07Z","lastTransitionTime":"2025-12-08T09:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.253896 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.253973 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.253986 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.254008 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.254023 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:07Z","lastTransitionTime":"2025-12-08T09:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.356553 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.356627 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.356657 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.356679 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.356715 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:07Z","lastTransitionTime":"2025-12-08T09:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.458917 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.458957 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.458969 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.458987 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.458998 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:07Z","lastTransitionTime":"2025-12-08T09:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.561833 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.561891 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.561903 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.561925 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.561944 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:07Z","lastTransitionTime":"2025-12-08T09:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.664483 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.664521 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.664529 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.664543 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.664555 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:07Z","lastTransitionTime":"2025-12-08T09:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.697400 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:07 crc kubenswrapper[4662]: E1208 09:16:07.697649 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.697892 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:07 crc kubenswrapper[4662]: E1208 09:16:07.698016 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.766836 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.766883 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.766899 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.766922 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.766940 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:07Z","lastTransitionTime":"2025-12-08T09:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.869461 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.869559 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.869572 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.869590 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.869606 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:07Z","lastTransitionTime":"2025-12-08T09:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.972859 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.972889 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.972899 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.972912 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:07 crc kubenswrapper[4662]: I1208 09:16:07.972922 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:07Z","lastTransitionTime":"2025-12-08T09:16:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.074675 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.074727 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.074756 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.074778 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.074789 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:08Z","lastTransitionTime":"2025-12-08T09:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.176970 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.177026 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.177036 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.177056 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.177069 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:08Z","lastTransitionTime":"2025-12-08T09:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.279157 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.279201 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.279211 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.279226 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.279237 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:08Z","lastTransitionTime":"2025-12-08T09:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.382244 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.382302 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.382315 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.382334 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.382346 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:08Z","lastTransitionTime":"2025-12-08T09:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.484663 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.484715 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.484726 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.484761 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.484775 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:08Z","lastTransitionTime":"2025-12-08T09:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.587255 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.587309 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.587323 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.587343 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.587358 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:08Z","lastTransitionTime":"2025-12-08T09:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.689714 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.689813 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.689828 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.689843 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.689855 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:08Z","lastTransitionTime":"2025-12-08T09:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.697074 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.697138 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:08 crc kubenswrapper[4662]: E1208 09:16:08.697212 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:08 crc kubenswrapper[4662]: E1208 09:16:08.697367 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.792171 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.792231 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.792248 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.792269 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.792284 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:08Z","lastTransitionTime":"2025-12-08T09:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.895040 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.895082 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.895090 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.895107 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.895119 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:08Z","lastTransitionTime":"2025-12-08T09:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.997822 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.997866 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.997881 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.997903 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:08 crc kubenswrapper[4662]: I1208 09:16:08.997917 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:08Z","lastTransitionTime":"2025-12-08T09:16:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.101447 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.101503 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.101515 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.101531 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.101545 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:09Z","lastTransitionTime":"2025-12-08T09:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.203640 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.203702 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.203719 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.203771 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.203790 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:09Z","lastTransitionTime":"2025-12-08T09:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.306808 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.306890 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.306909 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.306935 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.306954 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:09Z","lastTransitionTime":"2025-12-08T09:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.409789 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.409823 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.409831 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.409845 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.409854 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:09Z","lastTransitionTime":"2025-12-08T09:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.512419 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.512533 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.512554 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.512582 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.512604 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:09Z","lastTransitionTime":"2025-12-08T09:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.615316 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.615354 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.615362 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.615378 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.615389 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:09Z","lastTransitionTime":"2025-12-08T09:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.697491 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.697575 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:09 crc kubenswrapper[4662]: E1208 09:16:09.697628 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:09 crc kubenswrapper[4662]: E1208 09:16:09.697898 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.718808 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.718849 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.718859 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.718875 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.718887 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:09Z","lastTransitionTime":"2025-12-08T09:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.821725 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.821805 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.821818 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.821832 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.821842 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:09Z","lastTransitionTime":"2025-12-08T09:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.924502 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.924584 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.924608 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.924639 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:09 crc kubenswrapper[4662]: I1208 09:16:09.924665 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:09Z","lastTransitionTime":"2025-12-08T09:16:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.027391 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.027424 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.027432 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.027444 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.027452 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:10Z","lastTransitionTime":"2025-12-08T09:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.130520 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.130587 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.130611 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.130639 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.130661 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:10Z","lastTransitionTime":"2025-12-08T09:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.233963 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.234025 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.234040 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.234062 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.234076 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:10Z","lastTransitionTime":"2025-12-08T09:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.337571 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.337631 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.337643 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.337663 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.337676 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:10Z","lastTransitionTime":"2025-12-08T09:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.400594 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.400632 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.400640 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.400652 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.400698 4662 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T09:16:10Z","lastTransitionTime":"2025-12-08T09:16:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.451759 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf"] Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.452271 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.454311 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.454421 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.454616 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.455702 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.602052 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ae378b6b-22c4-48e3-a481-592a5b52395d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.602106 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ae378b6b-22c4-48e3-a481-592a5b52395d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.602125 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae378b6b-22c4-48e3-a481-592a5b52395d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.602146 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae378b6b-22c4-48e3-a481-592a5b52395d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.602159 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae378b6b-22c4-48e3-a481-592a5b52395d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.697105 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.697131 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:10 crc kubenswrapper[4662]: E1208 09:16:10.697252 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:10 crc kubenswrapper[4662]: E1208 09:16:10.697408 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.702541 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/ae378b6b-22c4-48e3-a481-592a5b52395d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.702588 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ae378b6b-22c4-48e3-a481-592a5b52395d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.702606 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae378b6b-22c4-48e3-a481-592a5b52395d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.702628 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae378b6b-22c4-48e3-a481-592a5b52395d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.702643 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae378b6b-22c4-48e3-a481-592a5b52395d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.702658 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/ae378b6b-22c4-48e3-a481-592a5b52395d-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.703145 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/ae378b6b-22c4-48e3-a481-592a5b52395d-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.703535 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/ae378b6b-22c4-48e3-a481-592a5b52395d-service-ca\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.712647 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ae378b6b-22c4-48e3-a481-592a5b52395d-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.719505 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ae378b6b-22c4-48e3-a481-592a5b52395d-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-bgfjf\" (UID: \"ae378b6b-22c4-48e3-a481-592a5b52395d\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.765589 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.986985 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" event={"ID":"ae378b6b-22c4-48e3-a481-592a5b52395d","Type":"ContainerStarted","Data":"eba5391373480eb7a01ec38513a53199110141e31d4a80528c2f12af43685993"} Dec 08 09:16:10 crc kubenswrapper[4662]: I1208 09:16:10.987027 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" event={"ID":"ae378b6b-22c4-48e3-a481-592a5b52395d","Type":"ContainerStarted","Data":"97a2f3ce4fc7346b2ed2208e5d18ade5eb56d5c41f3023dc4321aebaf7ae0241"} Dec 08 09:16:11 crc kubenswrapper[4662]: I1208 09:16:11.003016 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-bgfjf" podStartSLOduration=77.002999179 podStartE2EDuration="1m17.002999179s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:11.00154556 +0000 UTC m=+94.570573560" watchObservedRunningTime="2025-12-08 09:16:11.002999179 +0000 UTC m=+94.572027169" Dec 08 09:16:11 crc kubenswrapper[4662]: I1208 09:16:11.697093 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:11 crc kubenswrapper[4662]: I1208 09:16:11.697169 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:11 crc kubenswrapper[4662]: E1208 09:16:11.697643 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:11 crc kubenswrapper[4662]: E1208 09:16:11.697527 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:12 crc kubenswrapper[4662]: I1208 09:16:12.696894 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:12 crc kubenswrapper[4662]: I1208 09:16:12.696893 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:12 crc kubenswrapper[4662]: E1208 09:16:12.697032 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:12 crc kubenswrapper[4662]: E1208 09:16:12.697153 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:13 crc kubenswrapper[4662]: I1208 09:16:13.228880 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:13 crc kubenswrapper[4662]: E1208 09:16:13.229006 4662 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:16:13 crc kubenswrapper[4662]: E1208 09:16:13.229061 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs podName:42f18be0-5f4b-4e53-ac80-451fbfc548bf nodeName:}" failed. 
No retries permitted until 2025-12-08 09:17:17.229044958 +0000 UTC m=+160.798072948 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs") pod "network-metrics-daemon-hd7m7" (UID: "42f18be0-5f4b-4e53-ac80-451fbfc548bf") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 09:16:13 crc kubenswrapper[4662]: I1208 09:16:13.697152 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:13 crc kubenswrapper[4662]: I1208 09:16:13.697153 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:13 crc kubenswrapper[4662]: E1208 09:16:13.697312 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:13 crc kubenswrapper[4662]: E1208 09:16:13.702599 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:14 crc kubenswrapper[4662]: I1208 09:16:14.696882 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:14 crc kubenswrapper[4662]: E1208 09:16:14.697058 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:14 crc kubenswrapper[4662]: I1208 09:16:14.696885 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:14 crc kubenswrapper[4662]: E1208 09:16:14.697359 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:15 crc kubenswrapper[4662]: I1208 09:16:15.697410 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:15 crc kubenswrapper[4662]: I1208 09:16:15.697422 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:15 crc kubenswrapper[4662]: E1208 09:16:15.697614 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:15 crc kubenswrapper[4662]: E1208 09:16:15.697865 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:16 crc kubenswrapper[4662]: I1208 09:16:16.697267 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:16 crc kubenswrapper[4662]: I1208 09:16:16.697322 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:16 crc kubenswrapper[4662]: E1208 09:16:16.697402 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:16 crc kubenswrapper[4662]: E1208 09:16:16.697596 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:17 crc kubenswrapper[4662]: I1208 09:16:17.697230 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:17 crc kubenswrapper[4662]: E1208 09:16:17.697363 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:17 crc kubenswrapper[4662]: I1208 09:16:17.697230 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:17 crc kubenswrapper[4662]: E1208 09:16:17.697571 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:18 crc kubenswrapper[4662]: I1208 09:16:18.697353 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:18 crc kubenswrapper[4662]: I1208 09:16:18.697700 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:18 crc kubenswrapper[4662]: E1208 09:16:18.698366 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:18 crc kubenswrapper[4662]: I1208 09:16:18.701026 4662 scope.go:117] "RemoveContainer" containerID="c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd" Dec 08 09:16:18 crc kubenswrapper[4662]: E1208 09:16:18.701261 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" Dec 08 09:16:18 crc kubenswrapper[4662]: E1208 09:16:18.701562 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:19 crc kubenswrapper[4662]: I1208 09:16:19.697966 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:19 crc kubenswrapper[4662]: I1208 09:16:19.698097 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:19 crc kubenswrapper[4662]: E1208 09:16:19.698203 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:19 crc kubenswrapper[4662]: E1208 09:16:19.698309 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:20 crc kubenswrapper[4662]: I1208 09:16:20.696961 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:20 crc kubenswrapper[4662]: I1208 09:16:20.697000 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:20 crc kubenswrapper[4662]: E1208 09:16:20.697096 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:20 crc kubenswrapper[4662]: E1208 09:16:20.697223 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:21 crc kubenswrapper[4662]: I1208 09:16:21.697062 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:21 crc kubenswrapper[4662]: I1208 09:16:21.697188 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:21 crc kubenswrapper[4662]: E1208 09:16:21.697284 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:21 crc kubenswrapper[4662]: E1208 09:16:21.697333 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:22 crc kubenswrapper[4662]: I1208 09:16:22.697566 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:22 crc kubenswrapper[4662]: E1208 09:16:22.698329 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:22 crc kubenswrapper[4662]: I1208 09:16:22.697717 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:22 crc kubenswrapper[4662]: E1208 09:16:22.698535 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:23 crc kubenswrapper[4662]: I1208 09:16:23.697382 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:23 crc kubenswrapper[4662]: E1208 09:16:23.697793 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:23 crc kubenswrapper[4662]: I1208 09:16:23.697382 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:23 crc kubenswrapper[4662]: E1208 09:16:23.698458 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:24 crc kubenswrapper[4662]: I1208 09:16:24.697156 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:24 crc kubenswrapper[4662]: I1208 09:16:24.697557 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:24 crc kubenswrapper[4662]: E1208 09:16:24.697834 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:24 crc kubenswrapper[4662]: E1208 09:16:24.698232 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:25 crc kubenswrapper[4662]: I1208 09:16:25.696591 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:25 crc kubenswrapper[4662]: I1208 09:16:25.696596 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:25 crc kubenswrapper[4662]: E1208 09:16:25.696823 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:25 crc kubenswrapper[4662]: E1208 09:16:25.696910 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:26 crc kubenswrapper[4662]: I1208 09:16:26.697377 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:26 crc kubenswrapper[4662]: I1208 09:16:26.697448 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:26 crc kubenswrapper[4662]: E1208 09:16:26.698844 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:26 crc kubenswrapper[4662]: E1208 09:16:26.699024 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:27 crc kubenswrapper[4662]: I1208 09:16:27.697345 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:27 crc kubenswrapper[4662]: E1208 09:16:27.697485 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:27 crc kubenswrapper[4662]: I1208 09:16:27.697684 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:27 crc kubenswrapper[4662]: E1208 09:16:27.697793 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:28 crc kubenswrapper[4662]: I1208 09:16:28.696575 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:28 crc kubenswrapper[4662]: E1208 09:16:28.696778 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:28 crc kubenswrapper[4662]: I1208 09:16:28.696975 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:28 crc kubenswrapper[4662]: E1208 09:16:28.697174 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:29 crc kubenswrapper[4662]: I1208 09:16:29.697121 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:29 crc kubenswrapper[4662]: I1208 09:16:29.697121 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:29 crc kubenswrapper[4662]: E1208 09:16:29.698170 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:29 crc kubenswrapper[4662]: E1208 09:16:29.698078 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:30 crc kubenswrapper[4662]: I1208 09:16:30.049500 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-92hkj_adeadc12-d6e2-4168-a1c0-de79d16c8de9/kube-multus/1.log" Dec 08 09:16:30 crc kubenswrapper[4662]: I1208 09:16:30.050237 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-92hkj_adeadc12-d6e2-4168-a1c0-de79d16c8de9/kube-multus/0.log" Dec 08 09:16:30 crc kubenswrapper[4662]: I1208 09:16:30.050292 4662 generic.go:334] "Generic (PLEG): container finished" podID="adeadc12-d6e2-4168-a1c0-de79d16c8de9" containerID="1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027" exitCode=1 Dec 08 09:16:30 crc kubenswrapper[4662]: I1208 09:16:30.050326 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-92hkj" event={"ID":"adeadc12-d6e2-4168-a1c0-de79d16c8de9","Type":"ContainerDied","Data":"1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027"} Dec 08 09:16:30 crc kubenswrapper[4662]: I1208 09:16:30.050360 4662 scope.go:117] "RemoveContainer" containerID="991db0d105073e53b27a0951a95f0465ec52cbc2e92b35a488cc8a82abc7a08f" Dec 08 09:16:30 crc kubenswrapper[4662]: I1208 09:16:30.050826 4662 scope.go:117] "RemoveContainer" containerID="1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027" Dec 08 09:16:30 crc kubenswrapper[4662]: E1208 09:16:30.051012 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-92hkj_openshift-multus(adeadc12-d6e2-4168-a1c0-de79d16c8de9)\"" pod="openshift-multus/multus-92hkj" podUID="adeadc12-d6e2-4168-a1c0-de79d16c8de9" Dec 08 09:16:30 crc kubenswrapper[4662]: I1208 09:16:30.697541 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:30 crc kubenswrapper[4662]: I1208 09:16:30.697627 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:30 crc kubenswrapper[4662]: I1208 09:16:30.697785 4662 scope.go:117] "RemoveContainer" containerID="c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd" Dec 08 09:16:30 crc kubenswrapper[4662]: E1208 09:16:30.697837 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:30 crc kubenswrapper[4662]: E1208 09:16:30.697940 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-fhz87_openshift-ovn-kubernetes(8d221fdb-50ee-4a2a-9db5-30e79f604466)\"" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" Dec 08 09:16:30 crc kubenswrapper[4662]: E1208 09:16:30.698336 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:31 crc kubenswrapper[4662]: I1208 09:16:31.056034 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-92hkj_adeadc12-d6e2-4168-a1c0-de79d16c8de9/kube-multus/1.log" Dec 08 09:16:31 crc kubenswrapper[4662]: I1208 09:16:31.697426 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:31 crc kubenswrapper[4662]: I1208 09:16:31.697621 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:31 crc kubenswrapper[4662]: E1208 09:16:31.697727 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:31 crc kubenswrapper[4662]: E1208 09:16:31.697905 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:32 crc kubenswrapper[4662]: I1208 09:16:32.697200 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:32 crc kubenswrapper[4662]: E1208 09:16:32.697413 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:32 crc kubenswrapper[4662]: I1208 09:16:32.697712 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:32 crc kubenswrapper[4662]: E1208 09:16:32.697815 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:33 crc kubenswrapper[4662]: I1208 09:16:33.697139 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:16:33 crc kubenswrapper[4662]: E1208 09:16:33.697312 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Dec 08 09:16:33 crc kubenswrapper[4662]: I1208 09:16:33.697555 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:16:33 crc kubenswrapper[4662]: E1208 09:16:33.697635 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Dec 08 09:16:34 crc kubenswrapper[4662]: I1208 09:16:34.696855 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:16:34 crc kubenswrapper[4662]: I1208 09:16:34.696880 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:16:34 crc kubenswrapper[4662]: E1208 09:16:34.697053 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Dec 08 09:16:34 crc kubenswrapper[4662]: E1208 09:16:34.697240 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf" Dec 08 09:16:35 crc kubenswrapper[4662]: I1208 09:16:35.696514 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:16:35 crc kubenswrapper[4662]: I1208 09:16:35.696546 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:16:35 crc kubenswrapper[4662]: E1208 09:16:35.696651 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:16:35 crc kubenswrapper[4662]: E1208 09:16:35.696904 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:16:36 crc kubenswrapper[4662]: I1208 09:16:36.696949 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:16:36 crc kubenswrapper[4662]: I1208 09:16:36.697044 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:16:36 crc kubenswrapper[4662]: E1208 09:16:36.700128 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:16:36 crc kubenswrapper[4662]: E1208 09:16:36.700317 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 08 09:16:36 crc kubenswrapper[4662]: E1208 09:16:36.700776 4662 kubelet_node_status.go:497] "Node not becoming ready in time after startup"
Dec 08 09:16:37 crc kubenswrapper[4662]: E1208 09:16:37.637624 4662 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 08 09:16:37 crc kubenswrapper[4662]: I1208 09:16:37.696811 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:16:37 crc kubenswrapper[4662]: I1208 09:16:37.696836 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:16:37 crc kubenswrapper[4662]: E1208 09:16:37.696938 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:16:37 crc kubenswrapper[4662]: E1208 09:16:37.697021 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:16:38 crc kubenswrapper[4662]: I1208 09:16:38.697301 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:16:38 crc kubenswrapper[4662]: I1208 09:16:38.697301 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:16:38 crc kubenswrapper[4662]: E1208 09:16:38.697461 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:16:38 crc kubenswrapper[4662]: E1208 09:16:38.697636 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 08 09:16:39 crc kubenswrapper[4662]: I1208 09:16:39.697109 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:16:39 crc kubenswrapper[4662]: E1208 09:16:39.697249 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:16:39 crc kubenswrapper[4662]: I1208 09:16:39.697457 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:16:39 crc kubenswrapper[4662]: E1208 09:16:39.697830 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:16:40 crc kubenswrapper[4662]: I1208 09:16:40.696630 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:16:40 crc kubenswrapper[4662]: I1208 09:16:40.696701 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:16:40 crc kubenswrapper[4662]: E1208 09:16:40.696800 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 08 09:16:40 crc kubenswrapper[4662]: E1208 09:16:40.696879 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:16:41 crc kubenswrapper[4662]: I1208 09:16:41.696544 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:16:41 crc kubenswrapper[4662]: I1208 09:16:41.696635 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:16:41 crc kubenswrapper[4662]: E1208 09:16:41.696893 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:16:41 crc kubenswrapper[4662]: I1208 09:16:41.697007 4662 scope.go:117] "RemoveContainer" containerID="1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027"
Dec 08 09:16:41 crc kubenswrapper[4662]: E1208 09:16:41.697035 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:16:42 crc kubenswrapper[4662]: I1208 09:16:42.091779 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-92hkj_adeadc12-d6e2-4168-a1c0-de79d16c8de9/kube-multus/1.log"
Dec 08 09:16:42 crc kubenswrapper[4662]: I1208 09:16:42.091843 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-92hkj" event={"ID":"adeadc12-d6e2-4168-a1c0-de79d16c8de9","Type":"ContainerStarted","Data":"ee2d7f73f6fa58baaed744f52045b57227c634e502ca8143f4dd2f7011117cbb"}
Dec 08 09:16:42 crc kubenswrapper[4662]: E1208 09:16:42.640125 4662 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 08 09:16:42 crc kubenswrapper[4662]: I1208 09:16:42.696735 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:16:42 crc kubenswrapper[4662]: I1208 09:16:42.696735 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:16:42 crc kubenswrapper[4662]: E1208 09:16:42.696907 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:16:42 crc kubenswrapper[4662]: E1208 09:16:42.696959 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 08 09:16:43 crc kubenswrapper[4662]: I1208 09:16:43.697209 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:16:43 crc kubenswrapper[4662]: E1208 09:16:43.697339 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:16:43 crc kubenswrapper[4662]: I1208 09:16:43.697210 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:16:43 crc kubenswrapper[4662]: E1208 09:16:43.697508 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:16:44 crc kubenswrapper[4662]: I1208 09:16:44.697554 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:16:44 crc kubenswrapper[4662]: E1208 09:16:44.697707 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 08 09:16:44 crc kubenswrapper[4662]: I1208 09:16:44.697986 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:16:44 crc kubenswrapper[4662]: E1208 09:16:44.698151 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:16:45 crc kubenswrapper[4662]: I1208 09:16:45.697147 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:16:45 crc kubenswrapper[4662]: E1208 09:16:45.697296 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:16:45 crc kubenswrapper[4662]: I1208 09:16:45.697888 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:16:45 crc kubenswrapper[4662]: E1208 09:16:45.698011 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:16:45 crc kubenswrapper[4662]: I1208 09:16:45.698344 4662 scope.go:117] "RemoveContainer" containerID="c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd"
Dec 08 09:16:46 crc kubenswrapper[4662]: I1208 09:16:46.105051 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/3.log"
Dec 08 09:16:46 crc kubenswrapper[4662]: I1208 09:16:46.107784 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerStarted","Data":"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c"}
Dec 08 09:16:46 crc kubenswrapper[4662]: I1208 09:16:46.108682 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87"
Dec 08 09:16:46 crc kubenswrapper[4662]: I1208 09:16:46.140203 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podStartSLOduration=111.140185475 podStartE2EDuration="1m51.140185475s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:16:46.13886646 +0000 UTC m=+129.707894470" watchObservedRunningTime="2025-12-08 09:16:46.140185475 +0000 UTC m=+129.709213465"
Dec 08 09:16:46 crc kubenswrapper[4662]: I1208 09:16:46.558388 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hd7m7"]
Dec 08 09:16:46 crc kubenswrapper[4662]: I1208 09:16:46.558512 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:16:46 crc kubenswrapper[4662]: E1208 09:16:46.558611 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:16:46 crc kubenswrapper[4662]: I1208 09:16:46.697396 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:16:46 crc kubenswrapper[4662]: E1208 09:16:46.698602 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 08 09:16:47 crc kubenswrapper[4662]: E1208 09:16:47.642019 4662 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Dec 08 09:16:47 crc kubenswrapper[4662]: I1208 09:16:47.697272 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:16:47 crc kubenswrapper[4662]: I1208 09:16:47.697331 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:16:47 crc kubenswrapper[4662]: I1208 09:16:47.697440 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:16:47 crc kubenswrapper[4662]: E1208 09:16:47.697440 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:16:47 crc kubenswrapper[4662]: E1208 09:16:47.697613 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:16:47 crc kubenswrapper[4662]: E1208 09:16:47.697772 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:16:48 crc kubenswrapper[4662]: I1208 09:16:48.697011 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:16:48 crc kubenswrapper[4662]: E1208 09:16:48.697156 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 08 09:16:49 crc kubenswrapper[4662]: I1208 09:16:49.697110 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:16:49 crc kubenswrapper[4662]: E1208 09:16:49.697529 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:16:49 crc kubenswrapper[4662]: I1208 09:16:49.697126 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:16:49 crc kubenswrapper[4662]: E1208 09:16:49.698689 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:16:49 crc kubenswrapper[4662]: I1208 09:16:49.697107 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:16:49 crc kubenswrapper[4662]: E1208 09:16:49.699115 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:16:50 crc kubenswrapper[4662]: I1208 09:16:50.698427 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:16:50 crc kubenswrapper[4662]: E1208 09:16:50.698596 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Dec 08 09:16:51 crc kubenswrapper[4662]: I1208 09:16:51.697350 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:16:51 crc kubenswrapper[4662]: I1208 09:16:51.697434 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:16:51 crc kubenswrapper[4662]: E1208 09:16:51.697549 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Dec 08 09:16:51 crc kubenswrapper[4662]: E1208 09:16:51.697671 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Dec 08 09:16:51 crc kubenswrapper[4662]: I1208 09:16:51.698029 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:16:51 crc kubenswrapper[4662]: E1208 09:16:51.698333 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hd7m7" podUID="42f18be0-5f4b-4e53-ac80-451fbfc548bf"
Dec 08 09:16:51 crc kubenswrapper[4662]: I1208 09:16:51.985331 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87"
Dec 08 09:16:52 crc kubenswrapper[4662]: I1208 09:16:52.697608 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:16:52 crc kubenswrapper[4662]: I1208 09:16:52.700502 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 08 09:16:52 crc kubenswrapper[4662]: I1208 09:16:52.702637 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Dec 08 09:16:53 crc kubenswrapper[4662]: I1208 09:16:53.697404 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Dec 08 09:16:53 crc kubenswrapper[4662]: I1208 09:16:53.697452 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Dec 08 09:16:53 crc kubenswrapper[4662]: I1208 09:16:53.697834 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7"
Dec 08 09:16:53 crc kubenswrapper[4662]: I1208 09:16:53.703946 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 08 09:16:53 crc kubenswrapper[4662]: I1208 09:16:53.704472 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 08 09:16:53 crc kubenswrapper[4662]: I1208 09:16:53.709436 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 08 09:16:53 crc kubenswrapper[4662]: I1208 09:16:53.709876 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.012236 4662 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.053378 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5kmdj"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.053804 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.055555 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-xntqx"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.056021 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xntqx"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.057344 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z55dl"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.057919 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.058527 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.058766 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.059397 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.059897 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.061131 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.061304 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.061794 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.061823 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.061937 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.062192 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.062648 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.062922 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.063270 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.063497 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.063892 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mvfcz"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.065003 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.065340 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.068661 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.068900 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.069056 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.069083 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.069108 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.069063 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.069457 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.069473 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.069484 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.074627 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.075843 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.075899 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.075998 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.077300 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.077882 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.078486 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.078701 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.078986 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.079154 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.079962 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.080015 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.086397 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5v5vp"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.090613 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5v5vp"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.093690 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5xdtn"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.094199 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.094878 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.095523 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.095852 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.096325 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.096827 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.096968 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.097276 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.097426 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.098965 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.099107 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.099209 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.106110 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4z8qv"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.106518 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.106845 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bvwcn"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.107227 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.107797 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.107888 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.108423 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.109187 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.112174 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.109631 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.115721 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.116138 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.116364 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.116623 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.116692 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.121827 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.122026 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.122131 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.122277 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.122979 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.123132 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.123463 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.123839 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.124005 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.124724 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.125594 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.125849 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126073 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126101 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126212 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126312 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126348 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126525 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126575 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126677 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126731 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126526 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.126958 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.127190 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.127632 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.127858 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.128431 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.128478 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6fclc"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.129392 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6fclc"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.129668 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hf98q"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.130337 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.131580 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.132564 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.137202 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-9lp67"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.137603 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nw69"]
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.132926 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.138096 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.138347 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9lp67"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.140191 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145483 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-image-import-ca\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145519 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsg59\" (UniqueName: \"kubernetes.io/projected/f642958a-3ea0-4b41-81d8-6271c6403194-kube-api-access-vsg59\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145543 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/78eaea04-652c-4f09-b08e-8e14147da67d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145559 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/808740e2-7cff-469b-998d-a822737e748f-serving-cert\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145584 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-etcd-serving-ca\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145601 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f642958a-3ea0-4b41-81d8-6271c6403194-etcd-client\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145614 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/048e9b82-3d2f-4eb8-90b8-8a979951b43f-config\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145630 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-config\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145645 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-policies\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145661 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145687 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e9f387b-f674-4e88-a4ba-7748c3c817d8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145701 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e9f387b-f674-4e88-a4ba-7748c3c817d8-encryption-config\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145713 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f642958a-3ea0-4b41-81d8-6271c6403194-serving-cert\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145729 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4dd686df-d808-4ff9-91ea-9ae4e81f16f9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qcvgk\" (UID: \"4dd686df-d808-4ff9-91ea-9ae4e81f16f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145761 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-client-ca\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145775 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2560bbb9-084c-4976-9656-373cfd6aeb69-serving-cert\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145792 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e9f387b-f674-4e88-a4ba-7748c3c817d8-audit-dir\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145810 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-config\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145830 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4ncg\" (UniqueName: \"kubernetes.io/projected/99a0fc62-1689-41f8-9b21-eecb9ac81809-kube-api-access-b4ncg\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145852 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f9723a58-a575-47f6-9c85-fa7a6fc65158-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z55dl\" (UID: \"f9723a58-a575-47f6-9c85-fa7a6fc65158\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145873 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145890 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f642958a-3ea0-4b41-81d8-6271c6403194-audit-dir\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145906 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145921 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2445bd4c-02c7-400b-bafb-a693a4da9b7f-etcd-service-ca\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145936 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808740e2-7cff-469b-998d-a822737e748f-config\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145951 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/808740e2-7cff-469b-998d-a822737e748f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145968 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9723a58-a575-47f6-9c85-fa7a6fc65158-serving-cert\") pod \"openshift-config-operator-7777fb866f-z55dl\" (UID: \"f9723a58-a575-47f6-9c85-fa7a6fc65158\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.145984 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146000 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146016 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/808740e2-7cff-469b-998d-a822737e748f-service-ca-bundle\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146040 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c7d890b-52b7-4d99-8fba-69dc0c154474-metrics-tls\") pod \"dns-operator-744455d44c-mvfcz\" (UID: \"5c7d890b-52b7-4d99-8fba-69dc0c154474\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146054 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zv62\" (UniqueName: \"kubernetes.io/projected/2445bd4c-02c7-400b-bafb-a693a4da9b7f-kube-api-access-7zv62\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146068 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-audit\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146085 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8h2r\" (UniqueName: \"kubernetes.io/projected/4dd686df-d808-4ff9-91ea-9ae4e81f16f9-kube-api-access-m8h2r\") pod \"cluster-samples-operator-665b6dd947-qcvgk\" (UID: \"4dd686df-d808-4ff9-91ea-9ae4e81f16f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146100 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-config\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146113 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146353 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqfjf\" (UniqueName: \"kubernetes.io/projected/f9723a58-a575-47f6-9c85-fa7a6fc65158-kube-api-access-xqfjf\") pod \"openshift-config-operator-7777fb866f-z55dl\" (UID: \"f9723a58-a575-47f6-9c85-fa7a6fc65158\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146371 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2445bd4c-02c7-400b-bafb-a693a4da9b7f-etcd-ca\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146386 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctrlh\" (UniqueName: \"kubernetes.io/projected/6e9f387b-f674-4e88-a4ba-7748c3c817d8-kube-api-access-ctrlh\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt"
Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146400 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5bqt\" (UniqueName: \"kubernetes.io/projected/7109ee21-7989-491c-8847-edacebb08704-kube-api-access-q5bqt\") pod \"downloads-7954f5f757-xntqx\" (UID: \"7109ee21-7989-491c-8847-edacebb08704\") " pod="openshift-console/downloads-7954f5f757-xntqx" Dec
08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146414 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146427 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lklc\" (UniqueName: \"kubernetes.io/projected/2560bbb9-084c-4976-9656-373cfd6aeb69-kube-api-access-9lklc\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146443 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146459 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsdfr\" (UniqueName: \"kubernetes.io/projected/36ca08ff-e19c-4f57-aeed-6c98d844439d-kube-api-access-wsdfr\") pod \"openshift-apiserver-operator-796bbdcf4f-jmk4m\" (UID: \"36ca08ff-e19c-4f57-aeed-6c98d844439d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146472 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/048e9b82-3d2f-4eb8-90b8-8a979951b43f-auth-proxy-config\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146485 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/048e9b82-3d2f-4eb8-90b8-8a979951b43f-machine-approver-tls\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146499 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e9f387b-f674-4e88-a4ba-7748c3c817d8-audit-policies\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146514 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-config\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146532 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-client-ca\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146546 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vvp28\" (UniqueName: \"kubernetes.io/projected/818d198d-782d-4af8-b2f0-0d752ecb5621-kube-api-access-vvp28\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146561 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-dir\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146575 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146589 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2445bd4c-02c7-400b-bafb-a693a4da9b7f-config\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146602 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a0fc62-1689-41f8-9b21-eecb9ac81809-config\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146615 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78eaea04-652c-4f09-b08e-8e14147da67d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146628 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2445bd4c-02c7-400b-bafb-a693a4da9b7f-etcd-client\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:01 
crc kubenswrapper[4662]: I1208 09:17:01.146641 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e9f387b-f674-4e88-a4ba-7748c3c817d8-etcd-client\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146656 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f642958a-3ea0-4b41-81d8-6271c6403194-encryption-config\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146669 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2q2bq\" (UniqueName: \"kubernetes.io/projected/5c7d890b-52b7-4d99-8fba-69dc0c154474-kube-api-access-2q2bq\") pod \"dns-operator-744455d44c-mvfcz\" (UID: \"5c7d890b-52b7-4d99-8fba-69dc0c154474\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146683 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8scm\" (UniqueName: \"kubernetes.io/projected/808740e2-7cff-469b-998d-a822737e748f-kube-api-access-j8scm\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146696 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146718 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7vgl\" (UniqueName: \"kubernetes.io/projected/9480540c-cb7a-4822-9b8d-aeb553b74ab4-kube-api-access-v7vgl\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146733 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqbhz\" (UniqueName: \"kubernetes.io/projected/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-kube-api-access-xqbhz\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146762 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrdn8\" (UniqueName: \"kubernetes.io/projected/048e9b82-3d2f-4eb8-90b8-8a979951b43f-kube-api-access-jrdn8\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146776 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78eaea04-652c-4f09-b08e-8e14147da67d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146791 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9480540c-cb7a-4822-9b8d-aeb553b74ab4-serving-cert\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146826 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99a0fc62-1689-41f8-9b21-eecb9ac81809-trusted-ca\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146847 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ca08ff-e19c-4f57-aeed-6c98d844439d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jmk4m\" (UID: \"36ca08ff-e19c-4f57-aeed-6c98d844439d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146864 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e9f387b-f674-4e88-a4ba-7748c3c817d8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146881 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146896 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36ca08ff-e19c-4f57-aeed-6c98d844439d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jmk4m\" (UID: \"36ca08ff-e19c-4f57-aeed-6c98d844439d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146955 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f642958a-3ea0-4b41-81d8-6271c6403194-node-pullsecrets\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " 
pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.146997 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlh7z\" (UniqueName: \"kubernetes.io/projected/78eaea04-652c-4f09-b08e-8e14147da67d-kube-api-access-xlh7z\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.147032 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.147049 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.147070 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9f387b-f674-4e88-a4ba-7748c3c817d8-serving-cert\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.147091 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99a0fc62-1689-41f8-9b21-eecb9ac81809-serving-cert\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.147109 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-images\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.147125 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.147140 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2445bd4c-02c7-400b-bafb-a693a4da9b7f-serving-cert\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.149766 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z"] Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.150250 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.155507 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.155840 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.156322 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.156606 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.156887 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.157569 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.157614 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.158407 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.158705 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.158846 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.159016 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.159116 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.163650 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5"] Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.164287 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.166081 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.167068 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.167077 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.167579 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.167994 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.186876 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.193688 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6"] Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.197057 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.197853 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Dec 08 09:17:01 crc kubenswrapper[4662]: I1208 09:17:01.198662 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-s5f92"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.037063 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.037058 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.037949 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.038641 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.039027 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.039598 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.039941 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.039953 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c7d890b-52b7-4d99-8fba-69dc0c154474-metrics-tls\") pod \"dns-operator-744455d44c-mvfcz\" (UID: \"5c7d890b-52b7-4d99-8fba-69dc0c154474\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040082 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/808740e2-7cff-469b-998d-a822737e748f-service-ca-bundle\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040042 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040125 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8h2r\" (UniqueName: \"kubernetes.io/projected/4dd686df-d808-4ff9-91ea-9ae4e81f16f9-kube-api-access-m8h2r\") pod \"cluster-samples-operator-665b6dd947-qcvgk\" (UID: \"4dd686df-d808-4ff9-91ea-9ae4e81f16f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040150 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zv62\" (UniqueName: \"kubernetes.io/projected/2445bd4c-02c7-400b-bafb-a693a4da9b7f-kube-api-access-7zv62\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040174 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-audit\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040200 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqfjf\" (UniqueName: \"kubernetes.io/projected/f9723a58-a575-47f6-9c85-fa7a6fc65158-kube-api-access-xqfjf\") pod \"openshift-config-operator-7777fb866f-z55dl\" (UID: \"f9723a58-a575-47f6-9c85-fa7a6fc65158\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040222 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-config\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040243 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040272 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040294 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2445bd4c-02c7-400b-bafb-a693a4da9b7f-etcd-ca\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040313 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctrlh\" (UniqueName: \"kubernetes.io/projected/6e9f387b-f674-4e88-a4ba-7748c3c817d8-kube-api-access-ctrlh\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040334 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5bqt\" (UniqueName: \"kubernetes.io/projected/7109ee21-7989-491c-8847-edacebb08704-kube-api-access-q5bqt\") pod \"downloads-7954f5f757-xntqx\" (UID: \"7109ee21-7989-491c-8847-edacebb08704\") " pod="openshift-console/downloads-7954f5f757-xntqx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040357 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsdfr\" (UniqueName: \"kubernetes.io/projected/36ca08ff-e19c-4f57-aeed-6c98d844439d-kube-api-access-wsdfr\") pod \"openshift-apiserver-operator-796bbdcf4f-jmk4m\" (UID: \"36ca08ff-e19c-4f57-aeed-6c98d844439d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040380 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lklc\" (UniqueName: \"kubernetes.io/projected/2560bbb9-084c-4976-9656-373cfd6aeb69-kube-api-access-9lklc\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040440 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040464 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-config\") pod 
\"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040489 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/048e9b82-3d2f-4eb8-90b8-8a979951b43f-auth-proxy-config\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040520 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/048e9b82-3d2f-4eb8-90b8-8a979951b43f-machine-approver-tls\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040543 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e9f387b-f674-4e88-a4ba-7748c3c817d8-audit-policies\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040572 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vvp28\" (UniqueName: \"kubernetes.io/projected/818d198d-782d-4af8-b2f0-0d752ecb5621-kube-api-access-vvp28\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040600 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-client-ca\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040624 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2445bd4c-02c7-400b-bafb-a693a4da9b7f-config\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040648 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-dir\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040671 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 
09:17:02.040698 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a0fc62-1689-41f8-9b21-eecb9ac81809-config\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040723 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78eaea04-652c-4f09-b08e-8e14147da67d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040763 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2445bd4c-02c7-400b-bafb-a693a4da9b7f-etcd-client\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040788 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e9f387b-f674-4e88-a4ba-7748c3c817d8-etcd-client\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040815 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f642958a-3ea0-4b41-81d8-6271c6403194-encryption-config\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040839 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2q2bq\" (UniqueName: \"kubernetes.io/projected/5c7d890b-52b7-4d99-8fba-69dc0c154474-kube-api-access-2q2bq\") pod \"dns-operator-744455d44c-mvfcz\" (UID: \"5c7d890b-52b7-4d99-8fba-69dc0c154474\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040864 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8scm\" (UniqueName: \"kubernetes.io/projected/808740e2-7cff-469b-998d-a822737e748f-kube-api-access-j8scm\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040887 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v7vgl\" (UniqueName: \"kubernetes.io/projected/9480540c-cb7a-4822-9b8d-aeb553b74ab4-kube-api-access-v7vgl\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040912 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040947 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99a0fc62-1689-41f8-9b21-eecb9ac81809-trusted-ca\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040971 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqbhz\" (UniqueName: \"kubernetes.io/projected/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-kube-api-access-xqbhz\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040993 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrdn8\" (UniqueName: \"kubernetes.io/projected/048e9b82-3d2f-4eb8-90b8-8a979951b43f-kube-api-access-jrdn8\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041017 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78eaea04-652c-4f09-b08e-8e14147da67d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041037 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9480540c-cb7a-4822-9b8d-aeb553b74ab4-serving-cert\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041059 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ca08ff-e19c-4f57-aeed-6c98d844439d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jmk4m\" (UID: \"36ca08ff-e19c-4f57-aeed-6c98d844439d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041082 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e9f387b-f674-4e88-a4ba-7748c3c817d8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041106 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6fclc\" (UID: 
\"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041147 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36ca08ff-e19c-4f57-aeed-6c98d844439d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jmk4m\" (UID: \"36ca08ff-e19c-4f57-aeed-6c98d844439d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041168 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f642958a-3ea0-4b41-81d8-6271c6403194-node-pullsecrets\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041189 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-images\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041213 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlh7z\" (UniqueName: \"kubernetes.io/projected/78eaea04-652c-4f09-b08e-8e14147da67d-kube-api-access-xlh7z\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041237 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041261 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041283 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9f387b-f674-4e88-a4ba-7748c3c817d8-serving-cert\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041308 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99a0fc62-1689-41f8-9b21-eecb9ac81809-serving-cert\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041333 4662 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041355 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2445bd4c-02c7-400b-bafb-a693a4da9b7f-serving-cert\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041402 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-image-import-ca\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041427 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsg59\" (UniqueName: \"kubernetes.io/projected/f642958a-3ea0-4b41-81d8-6271c6403194-kube-api-access-vsg59\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041454 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/78eaea04-652c-4f09-b08e-8e14147da67d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041479 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/808740e2-7cff-469b-998d-a822737e748f-serving-cert\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041513 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-etcd-serving-ca\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041544 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f642958a-3ea0-4b41-81d8-6271c6403194-etcd-client\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041567 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/048e9b82-3d2f-4eb8-90b8-8a979951b43f-config\") pod \"machine-approver-56656f9798-8jcgg\" (UID: 
\"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041588 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-config\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041611 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-policies\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041633 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041656 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e9f387b-f674-4e88-a4ba-7748c3c817d8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041678 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e9f387b-f674-4e88-a4ba-7748c3c817d8-encryption-config\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041701 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f642958a-3ea0-4b41-81d8-6271c6403194-serving-cert\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041702 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/808740e2-7cff-469b-998d-a822737e748f-service-ca-bundle\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.041724 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4dd686df-d808-4ff9-91ea-9ae4e81f16f9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qcvgk\" (UID: \"4dd686df-d808-4ff9-91ea-9ae4e81f16f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.040074 4662 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.042671 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.043498 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.044249 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.044716 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.047475 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049626 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2445bd4c-02c7-400b-bafb-a693a4da9b7f-config\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049708 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e9f387b-f674-4e88-a4ba-7748c3c817d8-audit-dir\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049764 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-client-ca\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049790 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2560bbb9-084c-4976-9656-373cfd6aeb69-serving-cert\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049818 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-config\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049847 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4ncg\" (UniqueName: \"kubernetes.io/projected/99a0fc62-1689-41f8-9b21-eecb9ac81809-kube-api-access-b4ncg\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049872 4662 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f9723a58-a575-47f6-9c85-fa7a6fc65158-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z55dl\" (UID: \"f9723a58-a575-47f6-9c85-fa7a6fc65158\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049885 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049898 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049922 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049949 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f642958a-3ea0-4b41-81d8-6271c6403194-audit-dir\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.049977 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.050001 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2445bd4c-02c7-400b-bafb-a693a4da9b7f-etcd-service-ca\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.050025 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808740e2-7cff-469b-998d-a822737e748f-config\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.050051 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/808740e2-7cff-469b-998d-a822737e748f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.050075 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9723a58-a575-47f6-9c85-fa7a6fc65158-serving-cert\") pod \"openshift-config-operator-7777fb866f-z55dl\" (UID: \"f9723a58-a575-47f6-9c85-fa7a6fc65158\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.050101 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.050533 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/6e9f387b-f674-4e88-a4ba-7748c3c817d8-audit-dir\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.050758 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.051786 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.052940 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.053696 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-config\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.053868 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-client-ca\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.054317 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.054476 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.055363 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f642958a-3ea0-4b41-81d8-6271c6403194-audit-dir\") pod \"apiserver-76f77b778f-6fclc\" (UID: 
\"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.057476 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-dir\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.058449 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/4dd686df-d808-4ff9-91ea-9ae4e81f16f9-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-qcvgk\" (UID: \"4dd686df-d808-4ff9-91ea-9ae4e81f16f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.058487 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-audit\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.059254 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-config\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.059984 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e9f387b-f674-4e88-a4ba-7748c3c817d8-serving-cert\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.060626 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-config\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.060679 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2445bd4c-02c7-400b-bafb-a693a4da9b7f-etcd-ca\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.060981 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808740e2-7cff-469b-998d-a822737e748f-config\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.061589 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99a0fc62-1689-41f8-9b21-eecb9ac81809-config\") pod \"console-operator-58897d9998-5v5vp\" (UID: 
\"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.069553 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99a0fc62-1689-41f8-9b21-eecb9ac81809-serving-cert\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.070359 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e9f387b-f674-4e88-a4ba-7748c3c817d8-etcd-client\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.071266 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.071289 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/048e9b82-3d2f-4eb8-90b8-8a979951b43f-machine-approver-tls\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.071506 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.071600 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.072930 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2445bd4c-02c7-400b-bafb-a693a4da9b7f-etcd-service-ca\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.073302 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/f9723a58-a575-47f6-9c85-fa7a6fc65158-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z55dl\" (UID: \"f9723a58-a575-47f6-9c85-fa7a6fc65158\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.073355 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/048e9b82-3d2f-4eb8-90b8-8a979951b43f-auth-proxy-config\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") 
" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.073602 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.074154 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6e9f387b-f674-4e88-a4ba-7748c3c817d8-audit-policies\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.077244 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.078306 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.078468 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.078570 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.078696 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.078850 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.079092 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.079212 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.079323 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.079418 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.079547 4662 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.079657 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.079768 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.080056 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.082194 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5c7d890b-52b7-4d99-8fba-69dc0c154474-metrics-tls\") pod \"dns-operator-744455d44c-mvfcz\" (UID: \"5c7d890b-52b7-4d99-8fba-69dc0c154474\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.087357 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/6e9f387b-f674-4e88-a4ba-7748c3c817d8-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.087904 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/048e9b82-3d2f-4eb8-90b8-8a979951b43f-config\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.088673 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-client-ca\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.089182 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2560bbb9-084c-4976-9656-373cfd6aeb69-serving-cert\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.093168 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-config\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.094166 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-trusted-ca-bundle\") pod \"apiserver-76f77b778f-6fclc\" 
(UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.094674 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-policies\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.095010 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.095349 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2445bd4c-02c7-400b-bafb-a693a4da9b7f-serving-cert\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.095478 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.095861 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f9723a58-a575-47f6-9c85-fa7a6fc65158-serving-cert\") pod \"openshift-config-operator-7777fb866f-z55dl\" (UID: \"f9723a58-a575-47f6-9c85-fa7a6fc65158\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.096514 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/99a0fc62-1689-41f8-9b21-eecb9ac81809-trusted-ca\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.097677 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.099915 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.101167 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/36ca08ff-e19c-4f57-aeed-6c98d844439d-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-jmk4m\" (UID: \"36ca08ff-e19c-4f57-aeed-6c98d844439d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.101305 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/f642958a-3ea0-4b41-81d8-6271c6403194-node-pullsecrets\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.102660 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/808740e2-7cff-469b-998d-a822737e748f-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.119052 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f642958a-3ea0-4b41-81d8-6271c6403194-encryption-config\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.103893 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.117852 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e9f387b-f674-4e88-a4ba-7748c3c817d8-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.118485 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-images\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.119237 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.103072 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2445bd4c-02c7-400b-bafb-a693a4da9b7f-etcd-client\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.119946 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-image-import-ca\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.120429 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f642958a-3ea0-4b41-81d8-6271c6403194-etcd-client\") pod \"apiserver-76f77b778f-6fclc\" (UID: 
\"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.120501 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.121684 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/78eaea04-652c-4f09-b08e-8e14147da67d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.122027 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/78eaea04-652c-4f09-b08e-8e14147da67d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.122569 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f642958a-3ea0-4b41-81d8-6271c6403194-etcd-serving-ca\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.124216 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.124691 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.125601 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/36ca08ff-e19c-4f57-aeed-6c98d844439d-config\") pod \"openshift-apiserver-operator-796bbdcf4f-jmk4m\" (UID: \"36ca08ff-e19c-4f57-aeed-6c98d844439d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.126877 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9480540c-cb7a-4822-9b8d-aeb553b74ab4-serving-cert\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.127729 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.129944 4662 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.127860 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dgxkn"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.136584 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.136888 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.138108 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.138705 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.148040 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/6e9f387b-f674-4e88-a4ba-7748c3c817d8-encryption-config\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.148658 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.145551 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/808740e2-7cff-469b-998d-a822737e748f-serving-cert\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.153069 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f642958a-3ea0-4b41-81d8-6271c6403194-serving-cert\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.153378 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.157046 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.157528 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.158418 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.159086 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.159208 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.159317 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.159414 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.159508 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.159628 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.159717 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.159843 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.159954 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.160524 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.161303 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.162026 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.162418 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.162579 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.162795 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.163460 4662 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.163803 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.164051 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.167084 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.168036 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.169159 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zv62\" (UniqueName: \"kubernetes.io/projected/2445bd4c-02c7-400b-bafb-a693a4da9b7f-kube-api-access-7zv62\") pod \"etcd-operator-b45778765-4z8qv\" (UID: \"2445bd4c-02c7-400b-bafb-a693a4da9b7f\") " pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.169597 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.169653 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8h2r\" (UniqueName: \"kubernetes.io/projected/4dd686df-d808-4ff9-91ea-9ae4e81f16f9-kube-api-access-m8h2r\") pod \"cluster-samples-operator-665b6dd947-qcvgk\" (UID: \"4dd686df-d808-4ff9-91ea-9ae4e81f16f9\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.171182 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.175358 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.175539 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xntqx"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.175570 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.176352 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.176971 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5bqt\" (UniqueName: \"kubernetes.io/projected/7109ee21-7989-491c-8847-edacebb08704-kube-api-access-q5bqt\") pod \"downloads-7954f5f757-xntqx\" (UID: \"7109ee21-7989-491c-8847-edacebb08704\") " pod="openshift-console/downloads-7954f5f757-xntqx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.177114 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z6pb2"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.177989 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.178668 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-djbrv"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.179238 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.180276 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.180935 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.181705 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.182916 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5v5vp"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.184593 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.185727 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.186649 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.187501 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.187519 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.189551 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.194096 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsdfr\" (UniqueName: \"kubernetes.io/projected/36ca08ff-e19c-4f57-aeed-6c98d844439d-kube-api-access-wsdfr\") pod \"openshift-apiserver-operator-796bbdcf4f-jmk4m\" (UID: \"36ca08ff-e19c-4f57-aeed-6c98d844439d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.202038 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5kmdj"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.203617 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z55dl"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.203844 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.205186 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.216944 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6fclc"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.219824 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lklc\" (UniqueName: \"kubernetes.io/projected/2560bbb9-084c-4976-9656-373cfd6aeb69-kube-api-access-9lklc\") pod \"controller-manager-879f6c89f-hf98q\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.221075 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.221899 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.222112 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bvwcn"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.223089 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7t9mp"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.224113 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-n5dmv"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.224603 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.224929 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n5dmv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.225201 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-2gcvz"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.225668 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-2gcvz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.226136 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-lv65h"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.226574 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lv65h" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.226946 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqfjf\" (UniqueName: \"kubernetes.io/projected/f9723a58-a575-47f6-9c85-fa7a6fc65158-kube-api-access-xqfjf\") pod \"openshift-config-operator-7777fb866f-z55dl\" (UID: \"f9723a58-a575-47f6-9c85-fa7a6fc65158\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.228450 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4z8qv"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.229623 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.230839 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.234792 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mvfcz"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.236087 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.243225 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4ncg\" (UniqueName: \"kubernetes.io/projected/99a0fc62-1689-41f8-9b21-eecb9ac81809-kube-api-access-b4ncg\") pod \"console-operator-58897d9998-5v5vp\" (UID: \"99a0fc62-1689-41f8-9b21-eecb9ac81809\") " pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.243320 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.247666 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.248847 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.257607 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8scm\" (UniqueName: \"kubernetes.io/projected/808740e2-7cff-469b-998d-a822737e748f-kube-api-access-j8scm\") pod \"authentication-operator-69f744f599-bvwcn\" (UID: \"808740e2-7cff-469b-998d-a822737e748f\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.257699 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5xdtn"] Dec 08 09:17:02 crc 
kubenswrapper[4662]: I1208 09:17:02.259899 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.264367 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hf98q"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.268614 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.270252 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.271649 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9lp67"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.273024 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nw69"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.274569 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.274916 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctrlh\" (UniqueName: \"kubernetes.io/projected/6e9f387b-f674-4e88-a4ba-7748c3c817d8-kube-api-access-ctrlh\") pod \"apiserver-7bbb656c7d-fbwnt\" (UID: \"6e9f387b-f674-4e88-a4ba-7748c3c817d8\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.275669 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-djbrv"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.276072 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.278497 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dgxkn"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.279634 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.281003 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.282353 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.283689 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.285278 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.287185 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.287687 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z6pb2"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.289532 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.290081 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.290791 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v7vgl\" (UniqueName: \"kubernetes.io/projected/9480540c-cb7a-4822-9b8d-aeb553b74ab4-kube-api-access-v7vgl\") pod \"route-controller-manager-6576b87f9c-q7vpv\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.291447 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.293118 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.294359 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lv65h"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.295109 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.295990 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2gcvz"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.296917 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7t9mp"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.305988 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-xntqx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.310588 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/78eaea04-652c-4f09-b08e-8e14147da67d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.310716 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.327590 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.334100 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2q2bq\" (UniqueName: \"kubernetes.io/projected/5c7d890b-52b7-4d99-8fba-69dc0c154474-kube-api-access-2q2bq\") pod \"dns-operator-744455d44c-mvfcz\" (UID: \"5c7d890b-52b7-4d99-8fba-69dc0c154474\") " pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.337818 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.353266 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrdn8\" (UniqueName: \"kubernetes.io/projected/048e9b82-3d2f-4eb8-90b8-8a979951b43f-kube-api-access-jrdn8\") pod \"machine-approver-56656f9798-8jcgg\" (UID: \"048e9b82-3d2f-4eb8-90b8-8a979951b43f\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.356918 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.370107 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.381048 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vvp28\" (UniqueName: \"kubernetes.io/projected/818d198d-782d-4af8-b2f0-0d752ecb5621-kube-api-access-vvp28\") pod \"oauth-openshift-558db77b4-5xdtn\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: W1208 09:17:02.383150 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod048e9b82_3d2f_4eb8_90b8_8a979951b43f.slice/crio-4ea2694fefc0db7d7eb941ef6ed223fd6dc44a5cec508074259954a51aa4d543 WatchSource:0}: Error finding container 4ea2694fefc0db7d7eb941ef6ed223fd6dc44a5cec508074259954a51aa4d543: Status 404 returned error can't find the container with id 4ea2694fefc0db7d7eb941ef6ed223fd6dc44a5cec508074259954a51aa4d543 Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.416810 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsg59\" (UniqueName: \"kubernetes.io/projected/f642958a-3ea0-4b41-81d8-6271c6403194-kube-api-access-vsg59\") pod \"apiserver-76f77b778f-6fclc\" (UID: \"f642958a-3ea0-4b41-81d8-6271c6403194\") " pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.433418 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqbhz\" (UniqueName: \"kubernetes.io/projected/637eec7a-5d24-47b7-a111-ceaf0a27ebc1-kube-api-access-xqbhz\") pod \"machine-api-operator-5694c8668f-5kmdj\" (UID: \"637eec7a-5d24-47b7-a111-ceaf0a27ebc1\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.451508 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xlh7z\" (UniqueName: 
\"kubernetes.io/projected/78eaea04-652c-4f09-b08e-8e14147da67d-kube-api-access-xlh7z\") pod \"cluster-image-registry-operator-dc59b4c8b-crzkv\" (UID: \"78eaea04-652c-4f09-b08e-8e14147da67d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.461370 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.476721 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479131 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28226390-eaa7-48f5-8886-b50a88f4b37c-metrics-certs\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479170 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adea0714-95c0-4a54-bdef-2e645836fcc0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-j8tsq\" (UID: \"adea0714-95c0-4a54-bdef-2e645836fcc0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479285 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-registry-certificates\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479317 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479343 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-trusted-ca\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479370 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcgcg\" (UniqueName: \"kubernetes.io/projected/adea0714-95c0-4a54-bdef-2e645836fcc0-kube-api-access-xcgcg\") pod \"openshift-controller-manager-operator-756b6f6bc6-j8tsq\" (UID: \"adea0714-95c0-4a54-bdef-2e645836fcc0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479396 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87f08450-5929-4441-88f4-fbaec18e0f73-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479420 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87f08450-5929-4441-88f4-fbaec18e0f73-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479442 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqw4z\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-kube-api-access-xqw4z\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479461 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t4j7z\" (UID: \"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479498 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-serving-cert\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479571 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/28226390-eaa7-48f5-8886-b50a88f4b37c-stats-auth\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479627 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479644 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/764fd12c-1de2-4dea-8616-52ea0eff48a4-config\") pod \"kube-apiserver-operator-766d6c64bb-5tfg5\" (UID: \"764fd12c-1de2-4dea-8616-52ea0eff48a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479717 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-trusted-ca-bundle\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479825 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj55b\" (UniqueName: \"kubernetes.io/projected/28226390-eaa7-48f5-8886-b50a88f4b37c-kube-api-access-zj55b\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479861 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fksvb\" (UniqueName: \"kubernetes.io/projected/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-kube-api-access-fksvb\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479875 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adea0714-95c0-4a54-bdef-2e645836fcc0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-j8tsq\" (UID: \"adea0714-95c0-4a54-bdef-2e645836fcc0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479931 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-metrics-tls\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479951 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28226390-eaa7-48f5-8886-b50a88f4b37c-service-ca-bundle\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.479971 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-service-ca\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480025 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-registry-tls\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480048 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-oauth-serving-cert\") pod 
\"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480138 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-config\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480159 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/764fd12c-1de2-4dea-8616-52ea0eff48a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5tfg5\" (UID: \"764fd12c-1de2-4dea-8616-52ea0eff48a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480278 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-bound-sa-token\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480312 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-trusted-ca\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480382 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t4j7z\" (UID: \"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480429 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-oauth-config\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480479 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764fd12c-1de2-4dea-8616-52ea0eff48a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5tfg5\" (UID: \"764fd12c-1de2-4dea-8616-52ea0eff48a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480519 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/28226390-eaa7-48f5-8886-b50a88f4b37c-default-certificate\") pod \"router-default-5444994796-s5f92\" (UID: 
\"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480543 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t4j7z\" (UID: \"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.480605 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzcl5\" (UniqueName: \"kubernetes.io/projected/a0cf72db-464e-4859-bec6-0e3d456e10aa-kube-api-access-qzcl5\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: E1208 09:17:02.481388 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:02.981373302 +0000 UTC m=+146.550401312 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.489635 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.496339 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.517417 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.521426 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.528496 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.542525 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.556464 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.579897 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.581983 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.582693 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8b7r\" (UniqueName: \"kubernetes.io/projected/2c0fec9c-e9eb-4582-ae30-f34569a04270-kube-api-access-f8b7r\") pod \"olm-operator-6b444d44fb-5ttbx\" (UID: \"2c0fec9c-e9eb-4582-ae30-f34569a04270\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.582835 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg9c2\" (UniqueName: \"kubernetes.io/projected/20d06f62-0413-4b18-9e01-05932b0a663b-kube-api-access-fg9c2\") pod \"marketplace-operator-79b997595-z6pb2\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.583094 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764fd12c-1de2-4dea-8616-52ea0eff48a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5tfg5\" (UID: \"764fd12c-1de2-4dea-8616-52ea0eff48a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.583229 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/28226390-eaa7-48f5-8886-b50a88f4b37c-default-certificate\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.583339 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/93b6c2c1-696e-458b-91b4-61469e7b4571-tmpfs\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.583457 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccjcl\" (UniqueName: 
\"kubernetes.io/projected/53d2503b-f0bf-4fd0-81d7-fded86c71fa4-kube-api-access-ccjcl\") pod \"package-server-manager-789f6589d5-sgt6f\" (UID: \"53d2503b-f0bf-4fd0-81d7-fded86c71fa4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.583577 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsfr7\" (UniqueName: \"kubernetes.io/projected/6ae6a54c-1461-4e88-9ccb-7b78ee6876e7-kube-api-access-lsfr7\") pod \"dns-default-2gcvz\" (UID: \"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7\") " pod="openshift-dns/dns-default-2gcvz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.583707 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93b6c2c1-696e-458b-91b4-61469e7b4571-apiservice-cert\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.583859 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/aa6f057e-f08e-4540-ac22-5513a5e52b0a-certs\") pod \"machine-config-server-n5dmv\" (UID: \"aa6f057e-f08e-4540-ac22-5513a5e52b0a\") " pod="openshift-machine-config-operator/machine-config-server-n5dmv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.583967 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-socket-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.584078 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a49a2742-5f89-4a17-a477-dffb8db27f9c-secret-volume\") pod \"collect-profiles-29419755-m4q6b\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.584180 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2c0fec9c-e9eb-4582-ae30-f34569a04270-srv-cert\") pod \"olm-operator-6b444d44fb-5ttbx\" (UID: \"2c0fec9c-e9eb-4582-ae30-f34569a04270\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.584422 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28226390-eaa7-48f5-8886-b50a88f4b37c-metrics-certs\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: E1208 09:17:02.584498 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-08 09:17:03.084478952 +0000 UTC m=+146.653506952 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.584650 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmpj7\" (UniqueName: \"kubernetes.io/projected/bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88-kube-api-access-dmpj7\") pod \"multus-admission-controller-857f4d67dd-dgxkn\" (UID: \"bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.584889 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dgxkn\" (UID: \"bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.585008 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.585132 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-registry-certificates\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: E1208 09:17:02.587867 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.087850834 +0000 UTC m=+146.656878824 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.587993 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/28226390-eaa7-48f5-8886-b50a88f4b37c-default-certificate\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.589155 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/24432dab-77eb-4239-a3d1-a5ea2b818093-srv-cert\") pod \"catalog-operator-68c6474976-x52sk\" (UID: \"24432dab-77eb-4239-a3d1-a5ea2b818093\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.589312 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53d2503b-f0bf-4fd0-81d7-fded86c71fa4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sgt6f\" (UID: \"53d2503b-f0bf-4fd0-81d7-fded86c71fa4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.589429 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6369a891-1146-441f-88c0-791540d2651d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6r9cr\" (UID: \"6369a891-1146-441f-88c0-791540d2651d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.591202 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-registry-certificates\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.592073 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/28226390-eaa7-48f5-8886-b50a88f4b37c-metrics-certs\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.593239 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38482018-5576-4004-8774-87c055a7e8bc-serving-cert\") pod \"service-ca-operator-777779d784-wk4xv\" (UID: \"38482018-5576-4004-8774-87c055a7e8bc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" Dec 08 09:17:02 crc kubenswrapper[4662]: 
I1208 09:17:02.595093 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqw4z\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-kube-api-access-xqw4z\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595112 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcgcg\" (UniqueName: \"kubernetes.io/projected/adea0714-95c0-4a54-bdef-2e645836fcc0-kube-api-access-xcgcg\") pod \"openshift-controller-manager-operator-756b6f6bc6-j8tsq\" (UID: \"adea0714-95c0-4a54-bdef-2e645836fcc0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595132 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z6pb2\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595160 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87f08450-5929-4441-88f4-fbaec18e0f73-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595177 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t4j7z\" (UID: \"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595195 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae6a54c-1461-4e88-9ccb-7b78ee6876e7-config-volume\") pod \"dns-default-2gcvz\" (UID: \"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7\") " pod="openshift-dns/dns-default-2gcvz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595223 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/24432dab-77eb-4239-a3d1-a5ea2b818093-profile-collector-cert\") pod \"catalog-operator-68c6474976-x52sk\" (UID: \"24432dab-77eb-4239-a3d1-a5ea2b818093\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595260 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7bf7\" (UniqueName: \"kubernetes.io/projected/6369a891-1146-441f-88c0-791540d2651d-kube-api-access-n7bf7\") pod \"kube-storage-version-migrator-operator-b67b599dd-6r9cr\" (UID: \"6369a891-1146-441f-88c0-791540d2651d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" Dec 08 09:17:02 
crc kubenswrapper[4662]: I1208 09:17:02.595281 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9l4m9\" (UID: \"c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595298 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/673ff0b2-abac-4934-bd97-39d8331cd3bb-signing-key\") pod \"service-ca-9c57cc56f-djbrv\" (UID: \"673ff0b2-abac-4934-bd97-39d8331cd3bb\") " pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595313 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2c0fec9c-e9eb-4582-ae30-f34569a04270-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5ttbx\" (UID: \"2c0fec9c-e9eb-4582-ae30-f34569a04270\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595327 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-csi-data-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595347 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595376 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472b623f-a6ec-4145-badb-4e46644c7413-config\") pod \"kube-controller-manager-operator-78b949d7b-jr88j\" (UID: \"472b623f-a6ec-4145-badb-4e46644c7413\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595397 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zj55b\" (UniqueName: \"kubernetes.io/projected/28226390-eaa7-48f5-8886-b50a88f4b37c-kube-api-access-zj55b\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595412 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-842qc\" (UniqueName: \"kubernetes.io/projected/aa6f057e-f08e-4540-ac22-5513a5e52b0a-kube-api-access-842qc\") pod \"machine-config-server-n5dmv\" (UID: \"aa6f057e-f08e-4540-ac22-5513a5e52b0a\") " pod="openshift-machine-config-operator/machine-config-server-n5dmv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 
09:17:02.595429 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adea0714-95c0-4a54-bdef-2e645836fcc0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-j8tsq\" (UID: \"adea0714-95c0-4a54-bdef-2e645836fcc0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595468 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-oauth-serving-cert\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595492 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-registry-tls\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595507 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-config\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595522 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/673ff0b2-abac-4934-bd97-39d8331cd3bb-signing-cabundle\") pod \"service-ca-9c57cc56f-djbrv\" (UID: \"673ff0b2-abac-4934-bd97-39d8331cd3bb\") " pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595540 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lbv\" (UniqueName: \"kubernetes.io/projected/ecab2533-ce9e-4471-8a5e-749235846f79-kube-api-access-p5lbv\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595555 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5png9\" (UniqueName: \"kubernetes.io/projected/f1264999-01bb-41ef-b2ae-9c0d7e1f4f15-kube-api-access-5png9\") pod \"migrator-59844c95c7-z5brr\" (UID: \"f1264999-01bb-41ef-b2ae-9c0d7e1f4f15\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595574 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-trusted-ca\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595602 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z6pb2\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595623 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjn8c\" (UniqueName: \"kubernetes.io/projected/c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70-kube-api-access-vjn8c\") pod \"control-plane-machine-set-operator-78cbb6b69f-9l4m9\" (UID: \"c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595646 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ecab2533-ce9e-4471-8a5e-749235846f79-images\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595673 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/472b623f-a6ec-4145-badb-4e46644c7413-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jr88j\" (UID: \"472b623f-a6ec-4145-badb-4e46644c7413\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595693 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8wsf\" (UniqueName: \"kubernetes.io/projected/93b6c2c1-696e-458b-91b4-61469e7b4571-kube-api-access-t8wsf\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595709 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j84xm\" (UniqueName: \"kubernetes.io/projected/a49a2742-5f89-4a17-a477-dffb8db27f9c-kube-api-access-j84xm\") pod \"collect-profiles-29419755-m4q6b\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595726 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t4j7z\" (UID: \"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595771 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzcl5\" (UniqueName: \"kubernetes.io/projected/a0cf72db-464e-4859-bec6-0e3d456e10aa-kube-api-access-qzcl5\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595788 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/adea0714-95c0-4a54-bdef-2e645836fcc0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-j8tsq\" (UID: \"adea0714-95c0-4a54-bdef-2e645836fcc0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595803 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-plugins-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595834 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d958c154-0a68-4eab-94f3-2c7eb2e9d1c1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vwgdd\" (UID: \"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595853 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvsbl\" (UniqueName: \"kubernetes.io/projected/673ff0b2-abac-4934-bd97-39d8331cd3bb-kube-api-access-tvsbl\") pod \"service-ca-9c57cc56f-djbrv\" (UID: \"673ff0b2-abac-4934-bd97-39d8331cd3bb\") " pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595867 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvxv9\" (UniqueName: \"kubernetes.io/projected/24432dab-77eb-4239-a3d1-a5ea2b818093-kube-api-access-hvxv9\") pod \"catalog-operator-68c6474976-x52sk\" (UID: \"24432dab-77eb-4239-a3d1-a5ea2b818093\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595894 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-trusted-ca\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595910 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87f08450-5929-4441-88f4-fbaec18e0f73-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595925 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a49a2742-5f89-4a17-a477-dffb8db27f9c-config-volume\") pod \"collect-profiles-29419755-m4q6b\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595959 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-serving-cert\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595973 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6369a891-1146-441f-88c0-791540d2651d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6r9cr\" (UID: \"6369a891-1146-441f-88c0-791540d2651d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.595989 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/28226390-eaa7-48f5-8886-b50a88f4b37c-stats-auth\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.596004 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ae6a54c-1461-4e88-9ccb-7b78ee6876e7-metrics-tls\") pod \"dns-default-2gcvz\" (UID: \"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7\") " pod="openshift-dns/dns-default-2gcvz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.596029 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a-cert\") pod \"ingress-canary-lv65h\" (UID: \"22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a\") " pod="openshift-ingress-canary/ingress-canary-lv65h" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.596054 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38482018-5576-4004-8774-87c055a7e8bc-config\") pod \"service-ca-operator-777779d784-wk4xv\" (UID: \"38482018-5576-4004-8774-87c055a7e8bc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.599849 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-mountpoint-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.599893 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksxf8\" (UniqueName: \"kubernetes.io/projected/63c64b29-5b57-481f-b4b2-c92498738c8a-kube-api-access-ksxf8\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.599922 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/764fd12c-1de2-4dea-8616-52ea0eff48a4-config\") pod \"kube-apiserver-operator-766d6c64bb-5tfg5\" (UID: \"764fd12c-1de2-4dea-8616-52ea0eff48a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" Dec 08 
09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.599943 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-trusted-ca-bundle\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.599961 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ecab2533-ce9e-4471-8a5e-749235846f79-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.599978 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d958c154-0a68-4eab-94f3-2c7eb2e9d1c1-proxy-tls\") pod \"machine-config-controller-84d6567774-vwgdd\" (UID: \"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.599999 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/472b623f-a6ec-4145-badb-4e46644c7413-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jr88j\" (UID: \"472b623f-a6ec-4145-badb-4e46644c7413\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600020 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecab2533-ce9e-4471-8a5e-749235846f79-proxy-tls\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600039 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fksvb\" (UniqueName: \"kubernetes.io/projected/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-kube-api-access-fksvb\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600057 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-metrics-tls\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600074 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lq5g\" (UniqueName: \"kubernetes.io/projected/22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a-kube-api-access-9lq5g\") pod \"ingress-canary-lv65h\" (UID: \"22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a\") " pod="openshift-ingress-canary/ingress-canary-lv65h" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600092 4662 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28226390-eaa7-48f5-8886-b50a88f4b37c-service-ca-bundle\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600106 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-service-ca\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600126 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/764fd12c-1de2-4dea-8616-52ea0eff48a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5tfg5\" (UID: \"764fd12c-1de2-4dea-8616-52ea0eff48a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600142 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zsrrh\" (UniqueName: \"kubernetes.io/projected/38482018-5576-4004-8774-87c055a7e8bc-kube-api-access-zsrrh\") pod \"service-ca-operator-777779d784-wk4xv\" (UID: \"38482018-5576-4004-8774-87c055a7e8bc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600171 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/aa6f057e-f08e-4540-ac22-5513a5e52b0a-node-bootstrap-token\") pod \"machine-config-server-n5dmv\" (UID: \"aa6f057e-f08e-4540-ac22-5513a5e52b0a\") " pod="openshift-machine-config-operator/machine-config-server-n5dmv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600187 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2pv\" (UniqueName: \"kubernetes.io/projected/d958c154-0a68-4eab-94f3-2c7eb2e9d1c1-kube-api-access-2h2pv\") pod \"machine-config-controller-84d6567774-vwgdd\" (UID: \"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600217 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-oauth-config\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600233 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-registration-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600248 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/93b6c2c1-696e-458b-91b4-61469e7b4571-webhook-cert\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600275 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-bound-sa-token\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.600291 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t4j7z\" (UID: \"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.592674 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.601457 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/adea0714-95c0-4a54-bdef-2e645836fcc0-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-j8tsq\" (UID: \"adea0714-95c0-4a54-bdef-2e645836fcc0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.601943 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/764fd12c-1de2-4dea-8616-52ea0eff48a4-config\") pod \"kube-apiserver-operator-766d6c64bb-5tfg5\" (UID: \"764fd12c-1de2-4dea-8616-52ea0eff48a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.602496 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-trusted-ca\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.602705 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-trusted-ca-bundle\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.605219 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87f08450-5929-4441-88f4-fbaec18e0f73-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.605670 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t4j7z\" (UID: \"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.606004 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.609760 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-oauth-serving-cert\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.611941 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.613832 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.613500 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t4j7z\" (UID: \"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.614784 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28226390-eaa7-48f5-8886-b50a88f4b37c-service-ca-bundle\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.615326 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-service-ca\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.615922 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-trusted-ca\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.616137 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-config\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " 
pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.616545 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-registry-tls\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.618325 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/764fd12c-1de2-4dea-8616-52ea0eff48a4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-5tfg5\" (UID: \"764fd12c-1de2-4dea-8616-52ea0eff48a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.622592 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.623110 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.624042 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87f08450-5929-4441-88f4-fbaec18e0f73-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.629275 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-metrics-tls\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.629936 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/adea0714-95c0-4a54-bdef-2e645836fcc0-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-j8tsq\" (UID: \"adea0714-95c0-4a54-bdef-2e645836fcc0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.637299 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-oauth-config\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.641835 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.658176 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.660450 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/28226390-eaa7-48f5-8886-b50a88f4b37c-stats-auth\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.666563 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-serving-cert\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.677987 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.686512 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.701456 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.702767 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:02 crc kubenswrapper[4662]: E1208 09:17:02.703033 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.203013582 +0000 UTC m=+146.772041582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.704814 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lq5g\" (UniqueName: \"kubernetes.io/projected/22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a-kube-api-access-9lq5g\") pod \"ingress-canary-lv65h\" (UID: \"22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a\") " pod="openshift-ingress-canary/ingress-canary-lv65h" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.704945 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zsrrh\" (UniqueName: \"kubernetes.io/projected/38482018-5576-4004-8774-87c055a7e8bc-kube-api-access-zsrrh\") pod \"service-ca-operator-777779d784-wk4xv\" (UID: \"38482018-5576-4004-8774-87c055a7e8bc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.705072 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/aa6f057e-f08e-4540-ac22-5513a5e52b0a-node-bootstrap-token\") pod \"machine-config-server-n5dmv\" (UID: \"aa6f057e-f08e-4540-ac22-5513a5e52b0a\") " pod="openshift-machine-config-operator/machine-config-server-n5dmv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.705190 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h2pv\" (UniqueName: \"kubernetes.io/projected/d958c154-0a68-4eab-94f3-2c7eb2e9d1c1-kube-api-access-2h2pv\") pod \"machine-config-controller-84d6567774-vwgdd\" (UID: \"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.705308 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93b6c2c1-696e-458b-91b4-61469e7b4571-webhook-cert\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.705472 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-registration-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.705593 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fg9c2\" (UniqueName: \"kubernetes.io/projected/20d06f62-0413-4b18-9e01-05932b0a663b-kube-api-access-fg9c2\") pod \"marketplace-operator-79b997595-z6pb2\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.705706 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f8b7r\" (UniqueName: \"kubernetes.io/projected/2c0fec9c-e9eb-4582-ae30-f34569a04270-kube-api-access-f8b7r\") pod \"olm-operator-6b444d44fb-5ttbx\" (UID: \"2c0fec9c-e9eb-4582-ae30-f34569a04270\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.705865 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/93b6c2c1-696e-458b-91b4-61469e7b4571-tmpfs\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.705985 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-registration-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.706094 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccjcl\" (UniqueName: \"kubernetes.io/projected/53d2503b-f0bf-4fd0-81d7-fded86c71fa4-kube-api-access-ccjcl\") pod \"package-server-manager-789f6589d5-sgt6f\" (UID: \"53d2503b-f0bf-4fd0-81d7-fded86c71fa4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.706208 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsfr7\" (UniqueName: \"kubernetes.io/projected/6ae6a54c-1461-4e88-9ccb-7b78ee6876e7-kube-api-access-lsfr7\") pod \"dns-default-2gcvz\" (UID: \"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7\") " pod="openshift-dns/dns-default-2gcvz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.706329 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93b6c2c1-696e-458b-91b4-61469e7b4571-apiservice-cert\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.706444 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/aa6f057e-f08e-4540-ac22-5513a5e52b0a-certs\") pod \"machine-config-server-n5dmv\" (UID: \"aa6f057e-f08e-4540-ac22-5513a5e52b0a\") " pod="openshift-machine-config-operator/machine-config-server-n5dmv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.706570 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-socket-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.706675 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a49a2742-5f89-4a17-a477-dffb8db27f9c-secret-volume\") pod \"collect-profiles-29419755-m4q6b\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:02 crc 
kubenswrapper[4662]: I1208 09:17:02.706820 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2c0fec9c-e9eb-4582-ae30-f34569a04270-srv-cert\") pod \"olm-operator-6b444d44fb-5ttbx\" (UID: \"2c0fec9c-e9eb-4582-ae30-f34569a04270\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.706937 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmpj7\" (UniqueName: \"kubernetes.io/projected/bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88-kube-api-access-dmpj7\") pod \"multus-admission-controller-857f4d67dd-dgxkn\" (UID: \"bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.707091 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dgxkn\" (UID: \"bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.707252 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.707384 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/24432dab-77eb-4239-a3d1-a5ea2b818093-srv-cert\") pod \"catalog-operator-68c6474976-x52sk\" (UID: \"24432dab-77eb-4239-a3d1-a5ea2b818093\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.707493 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53d2503b-f0bf-4fd0-81d7-fded86c71fa4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sgt6f\" (UID: \"53d2503b-f0bf-4fd0-81d7-fded86c71fa4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.707610 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6369a891-1146-441f-88c0-791540d2651d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6r9cr\" (UID: \"6369a891-1146-441f-88c0-791540d2651d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.707765 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38482018-5576-4004-8774-87c055a7e8bc-serving-cert\") pod \"service-ca-operator-777779d784-wk4xv\" (UID: \"38482018-5576-4004-8774-87c055a7e8bc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.707193 4662 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-socket-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: E1208 09:17:02.707951 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.207940376 +0000 UTC m=+146.776968366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.708300 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6369a891-1146-441f-88c0-791540d2651d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-6r9cr\" (UID: \"6369a891-1146-441f-88c0-791540d2651d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.708317 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z6pb2\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.708566 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae6a54c-1461-4e88-9ccb-7b78ee6876e7-config-volume\") pod \"dns-default-2gcvz\" (UID: \"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7\") " pod="openshift-dns/dns-default-2gcvz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.708699 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/24432dab-77eb-4239-a3d1-a5ea2b818093-profile-collector-cert\") pod \"catalog-operator-68c6474976-x52sk\" (UID: \"24432dab-77eb-4239-a3d1-a5ea2b818093\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.709160 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9l4m9\" (UID: \"c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.709487 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7bf7\" (UniqueName: 
\"kubernetes.io/projected/6369a891-1146-441f-88c0-791540d2651d-kube-api-access-n7bf7\") pod \"kube-storage-version-migrator-operator-b67b599dd-6r9cr\" (UID: \"6369a891-1146-441f-88c0-791540d2651d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.711250 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/673ff0b2-abac-4934-bd97-39d8331cd3bb-signing-key\") pod \"service-ca-9c57cc56f-djbrv\" (UID: \"673ff0b2-abac-4934-bd97-39d8331cd3bb\") " pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.711435 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2c0fec9c-e9eb-4582-ae30-f34569a04270-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5ttbx\" (UID: \"2c0fec9c-e9eb-4582-ae30-f34569a04270\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.711629 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-csi-data-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.711774 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472b623f-a6ec-4145-badb-4e46644c7413-config\") pod \"kube-controller-manager-operator-78b949d7b-jr88j\" (UID: \"472b623f-a6ec-4145-badb-4e46644c7413\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.711917 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-842qc\" (UniqueName: \"kubernetes.io/projected/aa6f057e-f08e-4540-ac22-5513a5e52b0a-kube-api-access-842qc\") pod \"machine-config-server-n5dmv\" (UID: \"aa6f057e-f08e-4540-ac22-5513a5e52b0a\") " pod="openshift-machine-config-operator/machine-config-server-n5dmv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.712064 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/673ff0b2-abac-4934-bd97-39d8331cd3bb-signing-cabundle\") pod \"service-ca-9c57cc56f-djbrv\" (UID: \"673ff0b2-abac-4934-bd97-39d8331cd3bb\") " pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.712199 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lbv\" (UniqueName: \"kubernetes.io/projected/ecab2533-ce9e-4471-8a5e-749235846f79-kube-api-access-p5lbv\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.712443 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5png9\" (UniqueName: \"kubernetes.io/projected/f1264999-01bb-41ef-b2ae-9c0d7e1f4f15-kube-api-access-5png9\") pod \"migrator-59844c95c7-z5brr\" (UID: 
\"f1264999-01bb-41ef-b2ae-9c0d7e1f4f15\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.712608 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z6pb2\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.712725 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjn8c\" (UniqueName: \"kubernetes.io/projected/c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70-kube-api-access-vjn8c\") pod \"control-plane-machine-set-operator-78cbb6b69f-9l4m9\" (UID: \"c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.712892 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ecab2533-ce9e-4471-8a5e-749235846f79-images\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.713016 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/472b623f-a6ec-4145-badb-4e46644c7413-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jr88j\" (UID: \"472b623f-a6ec-4145-badb-4e46644c7413\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.713877 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8wsf\" (UniqueName: \"kubernetes.io/projected/93b6c2c1-696e-458b-91b4-61469e7b4571-kube-api-access-t8wsf\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.713158 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/472b623f-a6ec-4145-badb-4e46644c7413-config\") pod \"kube-controller-manager-operator-78b949d7b-jr88j\" (UID: \"472b623f-a6ec-4145-badb-4e46644c7413\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.713711 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/93b6c2c1-696e-458b-91b4-61469e7b4571-tmpfs\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.713483 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-csi-data-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 
09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.714341 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/24432dab-77eb-4239-a3d1-a5ea2b818093-profile-collector-cert\") pod \"catalog-operator-68c6474976-x52sk\" (UID: \"24432dab-77eb-4239-a3d1-a5ea2b818093\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.714480 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-dgxkn\" (UID: \"bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.714489 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j84xm\" (UniqueName: \"kubernetes.io/projected/a49a2742-5f89-4a17-a477-dffb8db27f9c-kube-api-access-j84xm\") pod \"collect-profiles-29419755-m4q6b\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.714731 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-plugins-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.714885 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d958c154-0a68-4eab-94f3-2c7eb2e9d1c1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vwgdd\" (UID: \"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.715015 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvsbl\" (UniqueName: \"kubernetes.io/projected/673ff0b2-abac-4934-bd97-39d8331cd3bb-kube-api-access-tvsbl\") pod \"service-ca-9c57cc56f-djbrv\" (UID: \"673ff0b2-abac-4934-bd97-39d8331cd3bb\") " pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.715945 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvxv9\" (UniqueName: \"kubernetes.io/projected/24432dab-77eb-4239-a3d1-a5ea2b818093-kube-api-access-hvxv9\") pod \"catalog-operator-68c6474976-x52sk\" (UID: \"24432dab-77eb-4239-a3d1-a5ea2b818093\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716081 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a49a2742-5f89-4a17-a477-dffb8db27f9c-config-volume\") pod \"collect-profiles-29419755-m4q6b\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716227 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/6369a891-1146-441f-88c0-791540d2651d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6r9cr\" (UID: \"6369a891-1146-441f-88c0-791540d2651d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716341 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a49a2742-5f89-4a17-a477-dffb8db27f9c-secret-volume\") pod \"collect-profiles-29419755-m4q6b\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716341 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a-cert\") pod \"ingress-canary-lv65h\" (UID: \"22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a\") " pod="openshift-ingress-canary/ingress-canary-lv65h" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716418 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ae6a54c-1461-4e88-9ccb-7b78ee6876e7-metrics-tls\") pod \"dns-default-2gcvz\" (UID: \"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7\") " pod="openshift-dns/dns-default-2gcvz" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716441 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38482018-5576-4004-8774-87c055a7e8bc-config\") pod \"service-ca-operator-777779d784-wk4xv\" (UID: \"38482018-5576-4004-8774-87c055a7e8bc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716459 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-mountpoint-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716477 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksxf8\" (UniqueName: \"kubernetes.io/projected/63c64b29-5b57-481f-b4b2-c92498738c8a-kube-api-access-ksxf8\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716509 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ecab2533-ce9e-4471-8a5e-749235846f79-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716526 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d958c154-0a68-4eab-94f3-2c7eb2e9d1c1-proxy-tls\") pod \"machine-config-controller-84d6567774-vwgdd\" (UID: \"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" Dec 08 09:17:02 crc 
kubenswrapper[4662]: I1208 09:17:02.716544 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/472b623f-a6ec-4145-badb-4e46644c7413-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jr88j\" (UID: \"472b623f-a6ec-4145-badb-4e46644c7413\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716561 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecab2533-ce9e-4471-8a5e-749235846f79-proxy-tls\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.716757 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-mountpoint-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.714823 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/63c64b29-5b57-481f-b4b2-c92498738c8a-plugins-dir\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.715874 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.717805 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a49a2742-5f89-4a17-a477-dffb8db27f9c-config-volume\") pod \"collect-profiles-29419755-m4q6b\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.717236 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ecab2533-ce9e-4471-8a5e-749235846f79-auth-proxy-config\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.718295 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-9l4m9\" (UID: \"c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.718588 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d958c154-0a68-4eab-94f3-2c7eb2e9d1c1-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-vwgdd\" (UID: \"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.727116 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6369a891-1146-441f-88c0-791540d2651d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-6r9cr\" (UID: \"6369a891-1146-441f-88c0-791540d2651d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.729594 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/2c0fec9c-e9eb-4582-ae30-f34569a04270-profile-collector-cert\") pod \"olm-operator-6b444d44fb-5ttbx\" (UID: \"2c0fec9c-e9eb-4582-ae30-f34569a04270\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.732054 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/472b623f-a6ec-4145-badb-4e46644c7413-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-jr88j\" (UID: \"472b623f-a6ec-4145-badb-4e46644c7413\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.738905 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.752577 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d958c154-0a68-4eab-94f3-2c7eb2e9d1c1-proxy-tls\") pod \"machine-config-controller-84d6567774-vwgdd\" (UID: \"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.756125 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.760709 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ecab2533-ce9e-4471-8a5e-749235846f79-proxy-tls\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.777758 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.796810 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.813519 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ecab2533-ce9e-4471-8a5e-749235846f79-images\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.827667 4662 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.828230 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.828329 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.828376 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.828599 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.829922 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: E1208 09:17:02.832010 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.331984146 +0000 UTC m=+146.901012256 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.833229 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.835895 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.843870 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.843905 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.855101 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-z6pb2\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.862292 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.867663 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.892937 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.895910 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.900644 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-z6pb2\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.916497 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.916775 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.929202 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/673ff0b2-abac-4934-bd97-39d8331cd3bb-signing-key\") pod \"service-ca-9c57cc56f-djbrv\" (UID: \"673ff0b2-abac-4934-bd97-39d8331cd3bb\") " pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.929876 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:02 crc kubenswrapper[4662]: E1208 09:17:02.930531 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.430508242 +0000 UTC m=+146.999536232 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.935957 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.958131 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.964451 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/673ff0b2-abac-4934-bd97-39d8331cd3bb-signing-cabundle\") pod \"service-ca-9c57cc56f-djbrv\" (UID: \"673ff0b2-abac-4934-bd97-39d8331cd3bb\") " pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.967910 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m"] Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.977283 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Dec 08 09:17:02 crc kubenswrapper[4662]: I1208 09:17:02.997560 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Dec 08 09:17:03 crc kubenswrapper[4662]: W1208 09:17:03.015998 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36ca08ff_e19c_4f57_aeed_6c98d844439d.slice/crio-f99dca75100510b0f59eaf25bff1ce95bcb5b9567774cbc9fd15374550caadc5 WatchSource:0}: Error finding container f99dca75100510b0f59eaf25bff1ce95bcb5b9567774cbc9fd15374550caadc5: Status 404 returned error can't find the container with id f99dca75100510b0f59eaf25bff1ce95bcb5b9567774cbc9fd15374550caadc5 Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.016873 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.024754 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.026557 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/38482018-5576-4004-8774-87c055a7e8bc-serving-cert\") pod \"service-ca-operator-777779d784-wk4xv\" (UID: \"38482018-5576-4004-8774-87c055a7e8bc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.030558 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.030694 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.530675723 +0000 UTC m=+147.099703713 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.030948 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.031226 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.531218528 +0000 UTC m=+147.100246518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.039657 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.039993 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.066244 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.069854 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/38482018-5576-4004-8774-87c055a7e8bc-config\") pod \"service-ca-operator-777779d784-wk4xv\" (UID: \"38482018-5576-4004-8774-87c055a7e8bc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.077972 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.098457 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.115287 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-xntqx"] Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.116774 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.127037 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/24432dab-77eb-4239-a3d1-a5ea2b818093-srv-cert\") pod \"catalog-operator-68c6474976-x52sk\" (UID: \"24432dab-77eb-4239-a3d1-a5ea2b818093\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.128012 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hf98q"] Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.133312 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.133834 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.633818484 +0000 UTC m=+147.202846474 (durationBeforeRetry 500ms). 
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.139230 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.140778 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-4z8qv"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.141712 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.152085 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z55dl"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.152344 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mvfcz"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.159649 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.171234 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/93b6c2c1-696e-458b-91b4-61469e7b4571-webhook-cert\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.177320 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.182839 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/93b6c2c1-696e-458b-91b4-61469e7b4571-apiservice-cert\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.185266 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xntqx" event={"ID":"7109ee21-7989-491c-8847-edacebb08704","Type":"ContainerStarted","Data":"b22310188ba53128e9a47544373b26e26d5e7778a7c6988f7bc3a30f7da50c13"}
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.186726 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53d2503b-f0bf-4fd0-81d7-fded86c71fa4-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-sgt6f\" (UID: \"53d2503b-f0bf-4fd0-81d7-fded86c71fa4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f"
Dec 08 09:17:03 crc kubenswrapper[4662]: W1208 09:17:03.187035 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2560bbb9_084c_4976_9656_373cfd6aeb69.slice/crio-373428cb7b4d270b509454e87b807accf67aa511ecb1b1095e4eaea99aed64b4 WatchSource:0}: Error finding container 373428cb7b4d270b509454e87b807accf67aa511ecb1b1095e4eaea99aed64b4: Status 404 returned error can't find the container with id 373428cb7b4d270b509454e87b807accf67aa511ecb1b1095e4eaea99aed64b4
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.199426 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk" event={"ID":"4dd686df-d808-4ff9-91ea-9ae4e81f16f9","Type":"ContainerStarted","Data":"693ae4b90cce4be710ba20be4ec280b40bebaa01334af3b0b45681d4412e87f5"}
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.199469 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk" event={"ID":"4dd686df-d808-4ff9-91ea-9ae4e81f16f9","Type":"ContainerStarted","Data":"67c6bd0239d706edb689fa6f7be1bf3982b4be021f3cc9051b9ab8c119a0f4d9"}
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.199729 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.207173 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" event={"ID":"048e9b82-3d2f-4eb8-90b8-8a979951b43f","Type":"ContainerStarted","Data":"13b746f69c8c5f3ade4fdd1cd989b700dcf80112632bfd5d5a6a1010c1e312e6"}
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.207205 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" event={"ID":"048e9b82-3d2f-4eb8-90b8-8a979951b43f","Type":"ContainerStarted","Data":"4ea2694fefc0db7d7eb941ef6ed223fd6dc44a5cec508074259954a51aa4d543"}
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.213972 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" event={"ID":"36ca08ff-e19c-4f57-aeed-6c98d844439d","Type":"ContainerStarted","Data":"f99dca75100510b0f59eaf25bff1ce95bcb5b9567774cbc9fd15374550caadc5"}
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.215796 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.219542 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2c0fec9c-e9eb-4582-ae30-f34569a04270-srv-cert\") pod \"olm-operator-6b444d44fb-5ttbx\" (UID: \"2c0fec9c-e9eb-4582-ae30-f34569a04270\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx"
Dec 08 09:17:03 crc kubenswrapper[4662]: W1208 09:17:03.222730 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9480540c_cb7a_4822_9b8d_aeb553b74ab4.slice/crio-c20460ba8661c04825913269da49b463e442fda05048e01526cbb90d20a866d8 WatchSource:0}: Error finding container c20460ba8661c04825913269da49b463e442fda05048e01526cbb90d20a866d8: Status 404 returned error can't find the container with id c20460ba8661c04825913269da49b463e442fda05048e01526cbb90d20a866d8
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.234461 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.234491 4662 request.go:700] Waited for 1.009648345s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-qd74k&limit=500&resourceVersion=0
Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.234834 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.734822958 +0000 UTC m=+147.303850948 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.239285 4662 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.255412 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.281480 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.284197 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-bvwcn"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.284626 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.298483 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.314509 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/aa6f057e-f08e-4540-ac22-5513a5e52b0a-node-bootstrap-token\") pod \"machine-config-server-n5dmv\" (UID: \"aa6f057e-f08e-4540-ac22-5513a5e52b0a\") " pod="openshift-machine-config-operator/machine-config-server-n5dmv"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.317270 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/aa6f057e-f08e-4540-ac22-5513a5e52b0a-certs\") pod \"machine-config-server-n5dmv\" (UID: \"aa6f057e-f08e-4540-ac22-5513a5e52b0a\") " pod="openshift-machine-config-operator/machine-config-server-n5dmv"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.334985 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-6fclc"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.335176 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.371381 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.387315 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5xdtn"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.373250 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.391562 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-5v5vp"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.373113 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.373376 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.873359331 +0000 UTC m=+147.442387321 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.382268 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.382517 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.376848 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae6a54c-1461-4e88-9ccb-7b78ee6876e7-config-volume\") pod \"dns-default-2gcvz\" (UID: \"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7\") " pod="openshift-dns/dns-default-2gcvz"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.395922 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.399572 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ae6a54c-1461-4e88-9ccb-7b78ee6876e7-metrics-tls\") pod \"dns-default-2gcvz\" (UID: \"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7\") " pod="openshift-dns/dns-default-2gcvz"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.403667 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.412661 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.912638538 +0000 UTC m=+147.481666528 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.422390 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.437272 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.461227 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5kmdj"]
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.463725 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.466054 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a-cert\") pod \"ingress-canary-lv65h\" (UID: \"22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a\") " pod="openshift-ingress-canary/ingress-canary-lv65h"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.496916 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.497330 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:03.997315088 +0000 UTC m=+147.566343078 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.515109 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/764fd12c-1de2-4dea-8616-52ea0eff48a4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-5tfg5\" (UID: \"764fd12c-1de2-4dea-8616-52ea0eff48a4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.538487 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-t4j7z\" (UID: \"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.543433 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.557433 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzcl5\" (UniqueName: \"kubernetes.io/projected/a0cf72db-464e-4859-bec6-0e3d456e10aa-kube-api-access-qzcl5\") pod \"console-f9d7485db-9lp67\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " pod="openshift-console/console-f9d7485db-9lp67"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.575030 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.586127 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fksvb\" (UniqueName: \"kubernetes.io/projected/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-kube-api-access-fksvb\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.598186 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.598578 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.098563779 +0000 UTC m=+147.667591769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.614256 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqw4z\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-kube-api-access-xqw4z\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.647132 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4-bound-sa-token\") pod \"ingress-operator-5b745b69d9-54qf6\" (UID: \"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.661003 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcgcg\" (UniqueName: \"kubernetes.io/projected/adea0714-95c0-4a54-bdef-2e645836fcc0-kube-api-access-xcgcg\") pod \"openshift-controller-manager-operator-756b6f6bc6-j8tsq\" (UID: \"adea0714-95c0-4a54-bdef-2e645836fcc0\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.664275 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zj55b\" (UniqueName: \"kubernetes.io/projected/28226390-eaa7-48f5-8886-b50a88f4b37c-kube-api-access-zj55b\") pod \"router-default-5444994796-s5f92\" (UID: \"28226390-eaa7-48f5-8886-b50a88f4b37c\") " pod="openshift-ingress/router-default-5444994796-s5f92"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.667371 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-s5f92"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.672182 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-bound-sa-token\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.696931 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lq5g\" (UniqueName: \"kubernetes.io/projected/22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a-kube-api-access-9lq5g\") pod \"ingress-canary-lv65h\" (UID: \"22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a\") " pod="openshift-ingress-canary/ingress-canary-lv65h"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.699732 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.699930 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.199879051 +0000 UTC m=+147.768907041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.700248 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.700582 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.200568889 +0000 UTC m=+147.769596879 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.735078 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zsrrh\" (UniqueName: \"kubernetes.io/projected/38482018-5576-4004-8774-87c055a7e8bc-kube-api-access-zsrrh\") pod \"service-ca-operator-777779d784-wk4xv\" (UID: \"38482018-5576-4004-8774-87c055a7e8bc\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.740958 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h2pv\" (UniqueName: \"kubernetes.io/projected/d958c154-0a68-4eab-94f3-2c7eb2e9d1c1-kube-api-access-2h2pv\") pod \"machine-config-controller-84d6567774-vwgdd\" (UID: \"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.764485 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccjcl\" (UniqueName: \"kubernetes.io/projected/53d2503b-f0bf-4fd0-81d7-fded86c71fa4-kube-api-access-ccjcl\") pod \"package-server-manager-789f6589d5-sgt6f\" (UID: \"53d2503b-f0bf-4fd0-81d7-fded86c71fa4\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.779938 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fg9c2\" (UniqueName: \"kubernetes.io/projected/20d06f62-0413-4b18-9e01-05932b0a663b-kube-api-access-fg9c2\") pod \"marketplace-operator-79b997595-z6pb2\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.788014 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.800517 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-lv65h"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.801493 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.802009 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.301988704 +0000 UTC m=+147.871016694 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.804854 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8b7r\" (UniqueName: \"kubernetes.io/projected/2c0fec9c-e9eb-4582-ae30-f34569a04270-kube-api-access-f8b7r\") pod \"olm-operator-6b444d44fb-5ttbx\" (UID: \"2c0fec9c-e9eb-4582-ae30-f34569a04270\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.813192 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsfr7\" (UniqueName: \"kubernetes.io/projected/6ae6a54c-1461-4e88-9ccb-7b78ee6876e7-kube-api-access-lsfr7\") pod \"dns-default-2gcvz\" (UID: \"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7\") " pod="openshift-dns/dns-default-2gcvz"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.834919 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9lp67"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.840460 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmpj7\" (UniqueName: \"kubernetes.io/projected/bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88-kube-api-access-dmpj7\") pod \"multus-admission-controller-857f4d67dd-dgxkn\" (UID: \"bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.851784 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.851843 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq"
Dec 08 09:17:03 crc kubenswrapper[4662]: W1208 09:17:03.858362 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28226390_eaa7_48f5_8886_b50a88f4b37c.slice/crio-d87c01f7f219a9a3ba2e2752ea18bafec7827ae3311394764de0a174a33a38c4 WatchSource:0}: Error finding container d87c01f7f219a9a3ba2e2752ea18bafec7827ae3311394764de0a174a33a38c4: Status 404 returned error can't find the container with id d87c01f7f219a9a3ba2e2752ea18bafec7827ae3311394764de0a174a33a38c4
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.861997 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7bf7\" (UniqueName: \"kubernetes.io/projected/6369a891-1146-441f-88c0-791540d2651d-kube-api-access-n7bf7\") pod \"kube-storage-version-migrator-operator-b67b599dd-6r9cr\" (UID: \"6369a891-1146-441f-88c0-791540d2651d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.875471 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/472b623f-a6ec-4145-badb-4e46644c7413-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-jr88j\" (UID: \"472b623f-a6ec-4145-badb-4e46644c7413\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.890204 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6"
Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.890590 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv"
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.910182 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5png9\" (UniqueName: \"kubernetes.io/projected/f1264999-01bb-41ef-b2ae-9c0d7e1f4f15-kube-api-access-5png9\") pod \"migrator-59844c95c7-z5brr\" (UID: \"f1264999-01bb-41ef-b2ae-9c0d7e1f4f15\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr" Dec 08 09:17:03 crc kubenswrapper[4662]: W1208 09:17:03.910320 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-db7332dcb0415eda56163695df03294394bf91c62087a1215c6c83d7fd0c33b0 WatchSource:0}: Error finding container db7332dcb0415eda56163695df03294394bf91c62087a1215c6c83d7fd0c33b0: Status 404 returned error can't find the container with id db7332dcb0415eda56163695df03294394bf91c62087a1215c6c83d7fd0c33b0 Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.910882 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:03 crc kubenswrapper[4662]: E1208 09:17:03.911316 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.411300504 +0000 UTC m=+147.980328484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.919667 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjn8c\" (UniqueName: \"kubernetes.io/projected/c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70-kube-api-access-vjn8c\") pod \"control-plane-machine-set-operator-78cbb6b69f-9l4m9\" (UID: \"c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.954946 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-842qc\" (UniqueName: \"kubernetes.io/projected/aa6f057e-f08e-4540-ac22-5513a5e52b0a-kube-api-access-842qc\") pod \"machine-config-server-n5dmv\" (UID: \"aa6f057e-f08e-4540-ac22-5513a5e52b0a\") " pod="openshift-machine-config-operator/machine-config-server-n5dmv" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.961621 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lbv\" (UniqueName: \"kubernetes.io/projected/ecab2533-ce9e-4471-8a5e-749235846f79-kube-api-access-p5lbv\") pod \"machine-config-operator-74547568cd-8gbbv\" (UID: \"ecab2533-ce9e-4471-8a5e-749235846f79\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.982644 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.983328 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j84xm\" (UniqueName: \"kubernetes.io/projected/a49a2742-5f89-4a17-a477-dffb8db27f9c-kube-api-access-j84xm\") pod \"collect-profiles-29419755-m4q6b\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:03 crc kubenswrapper[4662]: I1208 09:17:03.986195 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.014344 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.014471 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.514452326 +0000 UTC m=+148.083480316 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.014503 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.014979 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.51497192 +0000 UTC m=+148.083999910 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.015570 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvsbl\" (UniqueName: \"kubernetes.io/projected/673ff0b2-abac-4934-bd97-39d8331cd3bb-kube-api-access-tvsbl\") pod \"service-ca-9c57cc56f-djbrv\" (UID: \"673ff0b2-abac-4934-bd97-39d8331cd3bb\") " pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.019827 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksxf8\" (UniqueName: \"kubernetes.io/projected/63c64b29-5b57-481f-b4b2-c92498738c8a-kube-api-access-ksxf8\") pod \"csi-hostpathplugin-7t9mp\" (UID: \"63c64b29-5b57-481f-b4b2-c92498738c8a\") " pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.030167 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.032734 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.037847 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.044294 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvxv9\" (UniqueName: \"kubernetes.io/projected/24432dab-77eb-4239-a3d1-a5ea2b818093-kube-api-access-hvxv9\") pod \"catalog-operator-68c6474976-x52sk\" (UID: \"24432dab-77eb-4239-a3d1-a5ea2b818093\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.055002 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-n5dmv" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.062626 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.077204 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.078111 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8wsf\" (UniqueName: \"kubernetes.io/projected/93b6c2c1-696e-458b-91b4-61469e7b4571-kube-api-access-t8wsf\") pod \"packageserver-d55dfcdfc-x57s5\" (UID: \"93b6c2c1-696e-458b-91b4-61469e7b4571\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.084186 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.103964 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-2gcvz" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.104288 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.116203 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.116600 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.61658226 +0000 UTC m=+148.185610250 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.204121 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.204646 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.223540 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.224711 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.724693917 +0000 UTC m=+148.293721907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.228500 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.244795 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" event={"ID":"637eec7a-5d24-47b7-a111-ceaf0a27ebc1","Type":"ContainerStarted","Data":"5be419168653a2915c4da8b1a6f7ba73446b67fbec53e8b682b9ebee65a25ebd"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.245832 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-s5f92" event={"ID":"28226390-eaa7-48f5-8886-b50a88f4b37c","Type":"ContainerStarted","Data":"d87c01f7f219a9a3ba2e2752ea18bafec7827ae3311394764de0a174a33a38c4"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.302050 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.323416 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" event={"ID":"048e9b82-3d2f-4eb8-90b8-8a979951b43f","Type":"ContainerStarted","Data":"84b90420804110fdf26cf44b8fd7ee3a4a4b3e606551f1b1ba5d022f53b12325"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.325176 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.325622 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.825603518 +0000 UTC m=+148.394631518 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.329257 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" event={"ID":"78eaea04-652c-4f09-b08e-8e14147da67d","Type":"ContainerStarted","Data":"70b95a181dc3b440ac67089ddad2388afa555958a8375648f3da8df35b14e2cf"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.329296 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" event={"ID":"78eaea04-652c-4f09-b08e-8e14147da67d","Type":"ContainerStarted","Data":"e60de069fa61abd2f017039e0ed83f4f1de815289b3ade45ea04eba530c53bae"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.330788 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" event={"ID":"2445bd4c-02c7-400b-bafb-a693a4da9b7f","Type":"ContainerStarted","Data":"f7ff918e4f03b37a2a90d2c503e61f75d343898aca0b73279b1ff49de3d79950"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.330810 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" event={"ID":"2445bd4c-02c7-400b-bafb-a693a4da9b7f","Type":"ContainerStarted","Data":"c3706a291e15b1cdbaa49692f4802e32acf3ef2a2e226d03a5e73d7f2cf2b6d8"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.332471 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" event={"ID":"f642958a-3ea0-4b41-81d8-6271c6403194","Type":"ContainerStarted","Data":"86f5a53ff95caa6fcb707b8009194e03c2dc4091f46a111bf1b1aa03258d70c9"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.344565 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-xntqx" 
event={"ID":"7109ee21-7989-491c-8847-edacebb08704","Type":"ContainerStarted","Data":"865580dcdad7cb8642a63cb14a9b356ec66bfc0b7cdff53976d7858bfa43b6b5"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.345322 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-xntqx" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.349197 4662 patch_prober.go:28] interesting pod/downloads-7954f5f757-xntqx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.349441 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xntqx" podUID="7109ee21-7989-491c-8847-edacebb08704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.353004 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz" event={"ID":"5c7d890b-52b7-4d99-8fba-69dc0c154474","Type":"ContainerStarted","Data":"75f48e7b83aa170e4f5c09b460d2c49d63c881cfe85dda6f09d4f47353911ed5"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.353037 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz" event={"ID":"5c7d890b-52b7-4d99-8fba-69dc0c154474","Type":"ContainerStarted","Data":"740c145b6f903c54a04431ef4f414842e3eaca9a8f7d3f5ee166d8bcdcd02da5"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.368710 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"db7332dcb0415eda56163695df03294394bf91c62087a1215c6c83d7fd0c33b0"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.417231 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"8b14c4fb522a0ccb92c839ba967a6b694cd740516225a6c74dfd494c65a64f2f"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.420709 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" event={"ID":"6e9f387b-f674-4e88-a4ba-7748c3c817d8","Type":"ContainerStarted","Data":"0a5b78038e601c66b60f8a8698ea44a0cb8f0325e54573824fec3172caa40060"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.423917 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" event={"ID":"9480540c-cb7a-4822-9b8d-aeb553b74ab4","Type":"ContainerStarted","Data":"7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.423951 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" event={"ID":"9480540c-cb7a-4822-9b8d-aeb553b74ab4","Type":"ContainerStarted","Data":"c20460ba8661c04825913269da49b463e442fda05048e01526cbb90d20a866d8"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.424677 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.426419 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.426868 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:04.926848278 +0000 UTC m=+148.495876268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.442313 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk" event={"ID":"4dd686df-d808-4ff9-91ea-9ae4e81f16f9","Type":"ContainerStarted","Data":"2f9148e148c4a2216ef275fb88d2525f607c737b987083a11b9098b237417178"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.452044 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" event={"ID":"808740e2-7cff-469b-998d-a822737e748f","Type":"ContainerStarted","Data":"626c27c20bad33b91c05ea4d0866a654f0b0d4079b776321ccadb0f27cecec24"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.452106 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" event={"ID":"808740e2-7cff-469b-998d-a822737e748f","Type":"ContainerStarted","Data":"d44be886c2a85ba247699e536df50c76435a3a707cf62dfdcb96753c7ea6090f"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.466825 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" event={"ID":"2560bbb9-084c-4976-9656-373cfd6aeb69","Type":"ContainerStarted","Data":"d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.466872 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" event={"ID":"2560bbb9-084c-4976-9656-373cfd6aeb69","Type":"ContainerStarted","Data":"373428cb7b4d270b509454e87b807accf67aa511ecb1b1095e4eaea99aed64b4"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.467333 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.498534 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z"] Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 
09:17:04.510554 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.517223 4662 generic.go:334] "Generic (PLEG): container finished" podID="f9723a58-a575-47f6-9c85-fa7a6fc65158" containerID="4a920df36bd8d1cc6e9dc9d067a1516bd2a2b91e43b7a39b9e3bd56562053ce3" exitCode=0 Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.517329 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" event={"ID":"f9723a58-a575-47f6-9c85-fa7a6fc65158","Type":"ContainerDied","Data":"4a920df36bd8d1cc6e9dc9d067a1516bd2a2b91e43b7a39b9e3bd56562053ce3"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.517354 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" event={"ID":"f9723a58-a575-47f6-9c85-fa7a6fc65158","Type":"ContainerStarted","Data":"80437bea8b349d141ecc15f22050ab273a589c5a1bdc760ff950e3e9ed4888c3"} Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.528588 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.530251 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:05.030225577 +0000 UTC m=+148.599253567 (durationBeforeRetry 500ms). 
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.578853 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" event={"ID":"818d198d-782d-4af8-b2f0-0d752ecb5621","Type":"ContainerStarted","Data":"bcb72e40c14968b99dea4a213370c15091975e8f83fe875204651ddc5e1ae014"}
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.597273 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c4d7d457bf8432e7a48cd6fe69f7f16d8a28cb0ce061a8dd09e44b68f043d814"}
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.627617 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" event={"ID":"36ca08ff-e19c-4f57-aeed-6c98d844439d","Type":"ContainerStarted","Data":"255129bbbca5ea83b2412780e8ff324be21d3c1a458753559e3f94cf51afad47"}
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.630136 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.630463 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:05.130452399 +0000 UTC m=+148.699480389 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.645878 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5"]
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.671112 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5v5vp" event={"ID":"99a0fc62-1689-41f8-9b21-eecb9ac81809","Type":"ContainerStarted","Data":"54068b1cf175d69a607be1683426010d552605920ea1c31dccf8f6702305e2d8"}
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.671160 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-5v5vp" event={"ID":"99a0fc62-1689-41f8-9b21-eecb9ac81809","Type":"ContainerStarted","Data":"c69b555e30c67b36fd33cdbb47b1def7c40bc1f088310d1f03ec007bbeba4345"}
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.671862 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-5v5vp"
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.679900 4662 patch_prober.go:28] interesting pod/console-operator-58897d9998-5v5vp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body=
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.679948 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-5v5vp" podUID="99a0fc62-1689-41f8-9b21-eecb9ac81809" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused"
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.735454 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.735775 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:05.23575799 +0000 UTC m=+148.804785980 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.741631 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd"]
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.839486 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.839977 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:05.33996284 +0000 UTC m=+148.908990830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:04 crc kubenswrapper[4662]: W1208 09:17:04.847254 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda4aa0d47_d346_4a12_b9c8_f101a1ec2a7e.slice/crio-6f7405b076259d5fb26bf6d264059623b96ee465ce5f0ff65a83436085579dd7 WatchSource:0}: Error finding container 6f7405b076259d5fb26bf6d264059623b96ee465ce5f0ff65a83436085579dd7: Status 404 returned error can't find the container with id 6f7405b076259d5fb26bf6d264059623b96ee465ce5f0ff65a83436085579dd7
Dec 08 09:17:04 crc kubenswrapper[4662]: I1208 09:17:04.941292 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:04 crc kubenswrapper[4662]: E1208 09:17:04.941588 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:05.441575681 +0000 UTC m=+149.010603671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.044786 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:05 crc kubenswrapper[4662]: E1208 09:17:05.045145 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:05.545131614 +0000 UTC m=+149.114159604 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.131112 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-4z8qv" podStartSLOduration=130.131089189 podStartE2EDuration="2m10.131089189s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:05.11602567 +0000 UTC m=+148.685053660" watchObservedRunningTime="2025-12-08 09:17:05.131089189 +0000 UTC m=+148.700117179"
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.149602 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:05 crc kubenswrapper[4662]: E1208 09:17:05.149937 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:05.64991798 +0000 UTC m=+149.218945970 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.203492 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" podStartSLOduration=130.203472645 podStartE2EDuration="2m10.203472645s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:05.203371922 +0000 UTC m=+148.772399922" watchObservedRunningTime="2025-12-08 09:17:05.203472645 +0000 UTC m=+148.772500645"
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.256162 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:05 crc kubenswrapper[4662]: E1208 09:17:05.256554 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:05.756539837 +0000 UTC m=+149.325567827 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.358184 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:05 crc kubenswrapper[4662]: E1208 09:17:05.358540 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:05.858522727 +0000 UTC m=+149.427550717 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.430628 4662 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-q7vpv container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.430716 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" podUID="9480540c-cb7a-4822-9b8d-aeb553b74ab4" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.19:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.459734 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:05 crc kubenswrapper[4662]: E1208 09:17:05.460113 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:05.960094476 +0000 UTC m=+149.529122466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.565558 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:05 crc kubenswrapper[4662]: E1208 09:17:05.574687 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:06.074657918 +0000 UTC m=+149.643685908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.593885 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" podStartSLOduration=130.59386458 podStartE2EDuration="2m10.59386458s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:05.593259003 +0000 UTC m=+149.162286993" watchObservedRunningTime="2025-12-08 09:17:05.59386458 +0000 UTC m=+149.162892570"
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.670882 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:05 crc kubenswrapper[4662]: E1208 09:17:05.671218 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:06.171207391 +0000 UTC m=+149.740235381 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.685567 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" event={"ID":"764fd12c-1de2-4dea-8616-52ea0eff48a4","Type":"ContainerStarted","Data":"b03d5bc082e00bdb517fd863e81bbc9c2d281dd27394b94420bc393f5452141f"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.698106 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"e3bc3792037b16d8bfb1da387c311ceff830bfb483199276d3fe1567a8bc664c"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.698717 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.711412 4662 generic.go:334] "Generic (PLEG): container finished" podID="f642958a-3ea0-4b41-81d8-6271c6403194" containerID="ba4e21ec6743bf561ef656c6baea1d75bba6d89f0fa33dcc6b91f08a09830fc9" exitCode=0
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.711491 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" event={"ID":"f642958a-3ea0-4b41-81d8-6271c6403194","Type":"ContainerDied","Data":"ba4e21ec6743bf561ef656c6baea1d75bba6d89f0fa33dcc6b91f08a09830fc9"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.769545 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz" event={"ID":"5c7d890b-52b7-4d99-8fba-69dc0c154474","Type":"ContainerStarted","Data":"e1229232402207913edfbf5af728f0c89d69015462daa3150e3d5a10f529973e"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.771473 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:05 crc kubenswrapper[4662]: E1208 09:17:05.771835 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:06.271821234 +0000 UTC m=+149.840849224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.805285 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" event={"ID":"637eec7a-5d24-47b7-a111-ceaf0a27ebc1","Type":"ContainerStarted","Data":"f154e72e96a6ebef80cb819cedf19691849813d86e736e1259b8489e216fe4c8"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.806456 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n5dmv" event={"ID":"aa6f057e-f08e-4540-ac22-5513a5e52b0a","Type":"ContainerStarted","Data":"f66459a745e487f6cd5092ec4aa5ab9cddf08bea670bbddd4507919c622ea0a4"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.808431 4662 generic.go:334] "Generic (PLEG): container finished" podID="6e9f387b-f674-4e88-a4ba-7748c3c817d8" containerID="b11eeafa02430b4f5a1ddaa0c32e7a70e91e33345e4d2cf79fb8ad64b851e491" exitCode=0
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.808487 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" event={"ID":"6e9f387b-f674-4e88-a4ba-7748c3c817d8","Type":"ContainerDied","Data":"b11eeafa02430b4f5a1ddaa0c32e7a70e91e33345e4d2cf79fb8ad64b851e491"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.829114 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" event={"ID":"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1","Type":"ContainerStarted","Data":"f6ee35cb0a6dc08eb5b962a95008724d0e81ff58deccd3e0206b29864d48db0d"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.838330 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" event={"ID":"818d198d-782d-4af8-b2f0-0d752ecb5621","Type":"ContainerStarted","Data":"071e7d9b2be64e9538c4096f781758f6e53ecfb493a718badb5ec5c07a9892ab"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.839058 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn"
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.843316 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"85044df197df1bce02a79223312266da7b0a4492b13f5ac89481eed3516d4c01"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.860510 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-s5f92" event={"ID":"28226390-eaa7-48f5-8886-b50a88f4b37c","Type":"ContainerStarted","Data":"ac6a2da5e238225c94fadf474f0ddd92a5b21523a51049d95da9a7a881ff6a2f"}
Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.862268 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" event={"ID":"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e","Type":"ContainerStarted","Data":"6f7405b076259d5fb26bf6d264059623b96ee465ce5f0ff65a83436085579dd7"}
event={"ID":"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e","Type":"ContainerStarted","Data":"6f7405b076259d5fb26bf6d264059623b96ee465ce5f0ff65a83436085579dd7"} Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.875320 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:05 crc kubenswrapper[4662]: E1208 09:17:05.876553 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:06.376540788 +0000 UTC m=+149.945568778 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.880655 4662 patch_prober.go:28] interesting pod/downloads-7954f5f757-xntqx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.880694 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xntqx" podUID="7109ee21-7989-491c-8847-edacebb08704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.890720 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.923534 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-qcvgk" podStartSLOduration=131.923515414 podStartE2EDuration="2m11.923515414s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:05.853482352 +0000 UTC m=+149.422510342" watchObservedRunningTime="2025-12-08 09:17:05.923515414 +0000 UTC m=+149.492543404" Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.946102 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-5v5vp" Dec 08 09:17:05 crc kubenswrapper[4662]: I1208 09:17:05.976882 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 
09:17:05 crc kubenswrapper[4662]: E1208 09:17:05.981688 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:06.481669954 +0000 UTC m=+150.050698004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.082472 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:06 crc kubenswrapper[4662]: E1208 09:17:06.083291 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:06.583269724 +0000 UTC m=+150.152297714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.085381 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-jmk4m" podStartSLOduration=132.085368081 podStartE2EDuration="2m12.085368081s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:05.968966909 +0000 UTC m=+149.537994899" watchObservedRunningTime="2025-12-08 09:17:06.085368081 +0000 UTC m=+149.654396071" Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.186579 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.200270 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-xntqx" podStartSLOduration=131.200252712 podStartE2EDuration="2m11.200252712s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-12-08 09:17:06.199051019 +0000 UTC m=+149.768079029" watchObservedRunningTime="2025-12-08 09:17:06.200252712 +0000 UTC m=+149.769280702" Dec 08 09:17:06 crc kubenswrapper[4662]: E1208 09:17:06.207859 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:06.707827328 +0000 UTC m=+150.276855318 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.211439 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z6pb2"] Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.301584 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:06 crc kubenswrapper[4662]: E1208 09:17:06.302185 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:06.802172221 +0000 UTC m=+150.371200211 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.347811 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-8jcgg" podStartSLOduration=132.34779592 podStartE2EDuration="2m12.34779592s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:06.338464656 +0000 UTC m=+149.907492646" watchObservedRunningTime="2025-12-08 09:17:06.34779592 +0000 UTC m=+149.916823910" Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.402457 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:06 crc kubenswrapper[4662]: E1208 09:17:06.402763 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:06.902733322 +0000 UTC m=+150.471761312 (durationBeforeRetry 500ms). 
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.432958 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-crzkv" podStartSLOduration=131.432943253 podStartE2EDuration="2m11.432943253s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:06.432357077 +0000 UTC m=+150.001385067" watchObservedRunningTime="2025-12-08 09:17:06.432943253 +0000 UTC m=+150.001971243"
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.508278 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:06 crc kubenswrapper[4662]: E1208 09:17:06.508837 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:07.008825194 +0000 UTC m=+150.577853184 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.536098 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-5v5vp" podStartSLOduration=132.536083705 podStartE2EDuration="2m12.536083705s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:06.530991606 +0000 UTC m=+150.100019596" watchObservedRunningTime="2025-12-08 09:17:06.536083705 +0000 UTC m=+150.105111695"
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.617586 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:06 crc kubenswrapper[4662]: E1208 09:17:06.617913 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:07.117899257 +0000 UTC m=+150.686927247 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.619104 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn"
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.637419 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j"]
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.668334 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-s5f92"
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.688142 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 09:17:06 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld
Dec 08 09:17:06 crc kubenswrapper[4662]: [+]process-running ok
Dec 08 09:17:06 crc kubenswrapper[4662]: healthz check failed
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.688190 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.728081 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-bvwcn" podStartSLOduration=132.728062209 podStartE2EDuration="2m12.728062209s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:06.700568042 +0000 UTC m=+150.269596032" watchObservedRunningTime="2025-12-08 09:17:06.728062209 +0000 UTC m=+150.297090199"
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.732566 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:06 crc kubenswrapper[4662]: E1208 09:17:06.733733 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:07.233722292 +0000 UTC m=+150.802750282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.752247 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-9lp67"]
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.769184 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-dgxkn"]
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.835237 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:06 crc kubenswrapper[4662]: E1208 09:17:06.835488 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:07.335474026 +0000 UTC m=+150.904502016 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.900882 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-n5dmv" podStartSLOduration=5.900862243 podStartE2EDuration="5.900862243s" podCreationTimestamp="2025-12-08 09:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:06.888478986 +0000 UTC m=+150.457506966" watchObservedRunningTime="2025-12-08 09:17:06.900862243 +0000 UTC m=+150.469890233"
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.925659 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" event={"ID":"20d06f62-0413-4b18-9e01-05932b0a663b","Type":"ContainerStarted","Data":"9b527b59e4bd4387224aade56f8f9d323075efd4606bb41af8f937f1bfa75b24"}
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.935957 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-n5dmv" event={"ID":"aa6f057e-f08e-4540-ac22-5513a5e52b0a","Type":"ContainerStarted","Data":"37a19e0bd78ba0d6990853c199bf66c2d6929d15755d761d1d6a68e622726816"}
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.937326 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:06 crc kubenswrapper[4662]: E1208 09:17:06.937709 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:07.437693943 +0000 UTC m=+151.006721933 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.940681 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" event={"ID":"a4aa0d47-d346-4a12-b9c8-f101a1ec2a7e","Type":"ContainerStarted","Data":"21e40e51f4198fa883c28b493edfa39ad8407a7a0b9c698cfaadc40992656f3d"}
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.951161 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" event={"ID":"764fd12c-1de2-4dea-8616-52ea0eff48a4","Type":"ContainerStarted","Data":"d7f5de795c71ba2547287785817479b105431171f6ad4e9c1449115690630b16"}
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.959339 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lp67" event={"ID":"a0cf72db-464e-4859-bec6-0e3d456e10aa","Type":"ContainerStarted","Data":"e5cb5965f76e45df9c3bd0aa0f71dffe1876f2505b752e348eaa0645c831573e"}
Dec 08 09:17:06 crc kubenswrapper[4662]: I1208 09:17:06.973258 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f"]
Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.028179 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" event={"ID":"472b623f-a6ec-4145-badb-4e46644c7413","Type":"ContainerStarted","Data":"0c8f948475cee80df2a7a8de956334ae9dcdb84dfccaff0eca853bab8e255666"}
Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.039126 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.041466 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:07.541445801 +0000 UTC m=+151.110473791 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.050558 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" event={"ID":"bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88","Type":"ContainerStarted","Data":"60169f0bd4cfa3c7e814afdc9a1435e8f547498a68c95654d55584ab3f03bcfe"} Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.064781 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv"] Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.132502 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" event={"ID":"637eec7a-5d24-47b7-a111-ceaf0a27ebc1","Type":"ContainerStarted","Data":"3b2807f55b73837c6ed1de3a641c9eb67fb9cff6e2d0463397c565f1b519ead4"} Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.140073 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mvfcz" podStartSLOduration=132.14005458 podStartE2EDuration="2m12.14005458s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:07.088206112 +0000 UTC m=+150.657234102" watchObservedRunningTime="2025-12-08 09:17:07.14005458 +0000 UTC m=+150.709082560" Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.141317 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.141668 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:07.641657444 +0000 UTC m=+151.210685434 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.203440 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" podStartSLOduration=133.203424172 podStartE2EDuration="2m13.203424172s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:07.202133546 +0000 UTC m=+150.771161536" watchObservedRunningTime="2025-12-08 09:17:07.203424172 +0000 UTC m=+150.772452162" Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.208147 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-lv65h"] Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.239456 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" event={"ID":"f9723a58-a575-47f6-9c85-fa7a6fc65158","Type":"ContainerStarted","Data":"1f72b9a7223d804a46a1ddbc4d2a805909619bdbd7a10fa86820ea58e1f54506"} Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.239596 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.245130 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.245452 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:07.745437983 +0000 UTC m=+151.314465973 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:07 crc kubenswrapper[4662]: W1208 09:17:07.262257 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22c0f733_1b08_4cb8_a507_3d6f3bb8cf8a.slice/crio-badf422d6a0f02269d06e7972818bfc4d33fef0a09d70b499be624117a68a78b WatchSource:0}: Error finding container badf422d6a0f02269d06e7972818bfc4d33fef0a09d70b499be624117a68a78b: Status 404 returned error can't find the container with id badf422d6a0f02269d06e7972818bfc4d33fef0a09d70b499be624117a68a78b Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.270057 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"54c37821658018c96d1ac53015cd5ae79e4fbb854779eeb4220660ffbd4f8738"} Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.328854 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" event={"ID":"6e9f387b-f674-4e88-a4ba-7748c3c817d8","Type":"ContainerStarted","Data":"8dda26858f0a3b484a5eeac55acb125a58049613e21b6011bdfbea0aaad8a04a"} Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.339854 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-s5f92" podStartSLOduration=132.339839417 podStartE2EDuration="2m12.339839417s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:07.308188787 +0000 UTC m=+150.877216787" watchObservedRunningTime="2025-12-08 09:17:07.339839417 +0000 UTC m=+150.908867407" Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.351645 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.353171 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:07.853159449 +0000 UTC m=+151.422187439 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.374058 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.374416 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.389925 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" event={"ID":"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1","Type":"ContainerStarted","Data":"9b3893cc2c01e7cb1359a1beac4199389bf09d5dacc1e6b3b4bdd2df3f7eab84"} Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.397005 4662 patch_prober.go:28] interesting pod/downloads-7954f5f757-xntqx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.397056 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xntqx" podUID="7109ee21-7989-491c-8847-edacebb08704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.412234 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5kmdj" podStartSLOduration=132.412202983 podStartE2EDuration="2m12.412202983s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:07.402946531 +0000 UTC m=+150.971974521" watchObservedRunningTime="2025-12-08 09:17:07.412202983 +0000 UTC m=+150.981230973" Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.418208 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6"] Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.455278 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.457850 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:07.957832722 +0000 UTC m=+151.526860712 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.570373 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.601594 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:08.101572277 +0000 UTC m=+151.670600267 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.642676 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-5tfg5" podStartSLOduration=132.642658393 podStartE2EDuration="2m12.642658393s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:07.610395637 +0000 UTC m=+151.179423627" watchObservedRunningTime="2025-12-08 09:17:07.642658393 +0000 UTC m=+151.211686383" Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.677803 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr"] Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.678886 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.682575 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:07 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:07 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:07 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.682807 4662 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.688522 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:08.188498268 +0000 UTC m=+151.757526258 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.688659 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.689095 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:08.189085174 +0000 UTC m=+151.758113164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.792884 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.793585 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:08.293567332 +0000 UTC m=+151.862595322 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.894471 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.894868 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:08.394857494 +0000 UTC m=+151.963885484 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.911185 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-t4j7z" podStartSLOduration=132.911166727 podStartE2EDuration="2m12.911166727s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:07.743357348 +0000 UTC m=+151.312385368" watchObservedRunningTime="2025-12-08 09:17:07.911166727 +0000 UTC m=+151.480194717" Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.913726 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr"] Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.932627 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv"] Dec 08 09:17:07 crc kubenswrapper[4662]: I1208 09:17:07.996172 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:07 crc kubenswrapper[4662]: E1208 09:17:07.996524 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:08.496509475 +0000 UTC m=+152.065537465 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.002491 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9"] Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.050339 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" podStartSLOduration=133.050320997 podStartE2EDuration="2m13.050320997s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:08.026258083 +0000 UTC m=+151.595286063" watchObservedRunningTime="2025-12-08 09:17:08.050320997 +0000 UTC m=+151.619348987" Dec 08 09:17:08 crc kubenswrapper[4662]: W1208 09:17:08.075812 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc47e99e0_a11e_4b1f_a6c1_f9ec2d3e4d70.slice/crio-f15849a8dd2f6b2b2667d9741f2931496ff5b993fedc78e99cfceeae1effd96d WatchSource:0}: Error finding container f15849a8dd2f6b2b2667d9741f2931496ff5b993fedc78e99cfceeae1effd96d: Status 404 returned error can't find the container with id f15849a8dd2f6b2b2667d9741f2931496ff5b993fedc78e99cfceeae1effd96d Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.099505 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.099869 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:08.599857633 +0000 UTC m=+152.168885623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.129406 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" podStartSLOduration=134.129391635 podStartE2EDuration="2m14.129391635s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:08.128776348 +0000 UTC m=+151.697804338" watchObservedRunningTime="2025-12-08 09:17:08.129391635 +0000 UTC m=+151.698419625" Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.201603 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.201972 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:08.701957906 +0000 UTC m=+152.270985896 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.260628 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-2gcvz"] Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.300627 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b"] Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.305541 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.305818 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:08.805807307 +0000 UTC m=+152.374835297 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.313410 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" podStartSLOduration=133.313389643 podStartE2EDuration="2m13.313389643s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:08.313059014 +0000 UTC m=+151.882087004" watchObservedRunningTime="2025-12-08 09:17:08.313389643 +0000 UTC m=+151.882417633" Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.386784 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq"] Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.406300 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.407122 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:08.907096139 +0000 UTC m=+152.476124169 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.422365 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7t9mp"] Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.423188 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" event={"ID":"c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70","Type":"ContainerStarted","Data":"f15849a8dd2f6b2b2667d9741f2931496ff5b993fedc78e99cfceeae1effd96d"} Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.481776 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx"] Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.507284 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.507732 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.007718632 +0000 UTC m=+152.576746622 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.567989 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" event={"ID":"f642958a-3ea0-4b41-81d8-6271c6403194","Type":"ContainerStarted","Data":"52aaa16d3c376b5e5fd324972a3a3224836a826061a52dd783d33747986d12f6"} Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.603731 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" event={"ID":"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4","Type":"ContainerStarted","Data":"cd7479f06cf0a9717e24bd2e8eeab44cb984902e96a0c3ae29179605a872b091"} Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.608660 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.608799 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.108776797 +0000 UTC m=+152.677804787 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.608890 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.609239 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.109228679 +0000 UTC m=+152.678256669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.610157 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2gcvz" event={"ID":"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7","Type":"ContainerStarted","Data":"1bec34a0cb14c9503c2ca42de88ff382a1d218f028fbf685b1eb508b36b2f7c8"} Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.620250 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lp67" event={"ID":"a0cf72db-464e-4859-bec6-0e3d456e10aa","Type":"ContainerStarted","Data":"2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66"} Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.650952 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" event={"ID":"472b623f-a6ec-4145-badb-4e46644c7413","Type":"ContainerStarted","Data":"35c1056b7e437b37caf6e53aca4f681b71777a11acd616b66e7fb1225d560361"} Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.672661 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-9lp67" podStartSLOduration=134.672642242 podStartE2EDuration="2m14.672642242s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:08.672251651 +0000 UTC m=+152.241279641" watchObservedRunningTime="2025-12-08 09:17:08.672642242 +0000 UTC m=+152.241670232" Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.681826 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" event={"ID":"a49a2742-5f89-4a17-a477-dffb8db27f9c","Type":"ContainerStarted","Data":"e18e108294a69027d7d1984686ccbcaf96ce9f76841f8a135b5dd88f1b49c849"} Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.682335 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk"] Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.695969 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:08 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:08 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:08 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.696021 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.739907 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.741509 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.241460571 +0000 UTC m=+152.810488591 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.751922 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-jr88j" podStartSLOduration=133.751888634 podStartE2EDuration="2m13.751888634s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:08.735419467 +0000 UTC m=+152.304447457" watchObservedRunningTime="2025-12-08 09:17:08.751888634 +0000 UTC m=+152.320916644" Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.754150 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.754809 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.254788493 +0000 UTC m=+152.823816483 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.797779 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5"] Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.797816 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-djbrv"] Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.799030 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-vwgdd" event={"ID":"d958c154-0a68-4eab-94f3-2c7eb2e9d1c1","Type":"ContainerStarted","Data":"4cc19b8d52b7936a442280d5907997863031724818f06600a06661ee78ecb607"} Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.840949 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" event={"ID":"53d2503b-f0bf-4fd0-81d7-fded86c71fa4","Type":"ContainerStarted","Data":"a36724d7e15b77443288c8844a7700c00213a97d3ccc5648ff5eb832209f2f58"} Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.841001 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" event={"ID":"53d2503b-f0bf-4fd0-81d7-fded86c71fa4","Type":"ContainerStarted","Data":"5175527a5e71c1e5efd77b8b44db148cee8296aaf794bc46daf62dca524a5f12"} Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.841051 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.845994 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt" Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.862290 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.863379 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.363361193 +0000 UTC m=+152.932389193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.892966 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lv65h" event={"ID":"22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a","Type":"ContainerStarted","Data":"a6fe7c30ff3c16c65bdea7303a4d64b1864a6676eb4eedafb30536a8d13c6212"}
Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.893028 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-lv65h" event={"ID":"22c0f733-1b08-4cb8-a507-3d6f3bb8cf8a","Type":"ContainerStarted","Data":"badf422d6a0f02269d06e7972818bfc4d33fef0a09d70b499be624117a68a78b"}
Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.905432 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" podStartSLOduration=133.905414415 podStartE2EDuration="2m13.905414415s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:08.905071306 +0000 UTC m=+152.474099286" watchObservedRunningTime="2025-12-08 09:17:08.905414415 +0000 UTC m=+152.474442405"
Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.919556 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr" event={"ID":"f1264999-01bb-41ef-b2ae-9c0d7e1f4f15","Type":"ContainerStarted","Data":"db0259d376cbf1baf3b64a2e17ac524327cf3cc118641b739f6f5f709395bc29"}
Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.950400 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" event={"ID":"38482018-5576-4004-8774-87c055a7e8bc","Type":"ContainerStarted","Data":"7a6f2b4a1c5d3954f6d5ca0e6d1f0678bee36cecf35799b2a8f107eb3d95a6d0"}
Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.950449 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" event={"ID":"38482018-5576-4004-8774-87c055a7e8bc","Type":"ContainerStarted","Data":"ce487a3527cd04725f66de55b5a43f5aca7253d1128a37cbec5b8078689d03e8"}
Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.963850 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" event={"ID":"6369a891-1146-441f-88c0-791540d2651d","Type":"ContainerStarted","Data":"492f135c528e82ff5e3f7ec2f36bf5b5ac8f8bb248849e70e1a9b535b3023046"}
Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.968596 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:08 crc kubenswrapper[4662]: E1208 09:17:08.969779 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.469763613 +0000 UTC m=+153.038791603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.980615 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-lv65h" podStartSLOduration=7.980597737 podStartE2EDuration="7.980597737s" podCreationTimestamp="2025-12-08 09:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:08.979018274 +0000 UTC m=+152.548046264" watchObservedRunningTime="2025-12-08 09:17:08.980597737 +0000 UTC m=+152.549625727"
Dec 08 09:17:08 crc kubenswrapper[4662]: I1208 09:17:08.985095 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" event={"ID":"ecab2533-ce9e-4471-8a5e-749235846f79","Type":"ContainerStarted","Data":"627a96b814fa6562d4d2b7f731640e0685a3665ab0cad2b7afb3dd8ba9d1d826"}
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.004520 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" event={"ID":"20d06f62-0413-4b18-9e01-05932b0a663b","Type":"ContainerStarted","Data":"6632918a8e2281c149ff9533208260b6ee78ca5d2bd8b60f3f18623c8d7e74cd"}
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.004565 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2"
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.006059 4662 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-z6pb2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.006095 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" podUID="20d06f62-0413-4b18-9e01-05932b0a663b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.020435 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-fbwnt"
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.045859 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-wk4xv" podStartSLOduration=134.04584426 podStartE2EDuration="2m14.04584426s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:09.006335366 +0000 UTC m=+152.575363356" watchObservedRunningTime="2025-12-08 09:17:09.04584426 +0000 UTC m=+152.614872250"
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.069567 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.070926 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.57090427 +0000 UTC m=+153.139932260 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.078099 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" podStartSLOduration=134.078085415 podStartE2EDuration="2m14.078085415s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:09.047990318 +0000 UTC m=+152.617018308" watchObservedRunningTime="2025-12-08 09:17:09.078085415 +0000 UTC m=+152.647113405"
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.172128 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.172960 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.672942412 +0000 UTC m=+153.241970412 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.273634 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.274050 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.774030728 +0000 UTC m=+153.343058718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.330706 4662 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-z55dl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.331155 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" podUID="f9723a58-a575-47f6-9c85-fa7a6fc65158" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.331869 4662 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-z55dl container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.331894 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" podUID="f9723a58-a575-47f6-9c85-fa7a6fc65158" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.375693 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.376128 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.876112801 +0000 UTC m=+153.445140791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.477603 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.477951 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:09.977936787 +0000 UTC m=+153.546964767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.585210 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.585542 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.08552756 +0000 UTC m=+153.654555540 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.674721 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 09:17:09 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld
Dec 08 09:17:09 crc kubenswrapper[4662]: [+]process-running ok
Dec 08 09:17:09 crc kubenswrapper[4662]: healthz check failed
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.674994 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.687339 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.687512 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.187487819 +0000 UTC m=+153.756515799 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.687539 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.687844 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.187835269 +0000 UTC m=+153.756863259 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.790283 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.790433 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.290407565 +0000 UTC m=+153.859435555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.790479 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.790989 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.290981791 +0000 UTC m=+153.860009781 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.892135 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.892260 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.392243812 +0000 UTC m=+153.961271802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.892322 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.892636 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.392626492 +0000 UTC m=+153.961654482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.993077 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.993282 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.493256236 +0000 UTC m=+154.062284226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:09 crc kubenswrapper[4662]: I1208 09:17:09.993348 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:09 crc kubenswrapper[4662]: E1208 09:17:09.993645 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.493633406 +0000 UTC m=+154.062661396 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.008892 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" event={"ID":"f642958a-3ea0-4b41-81d8-6271c6403194","Type":"ContainerStarted","Data":"c8fcf02e632a7bac22b2e2dba360527c8f25921e8c0f9a15a181e27153e15b76"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.011231 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" event={"ID":"53d2503b-f0bf-4fd0-81d7-fded86c71fa4","Type":"ContainerStarted","Data":"7a743563e6596344840063d1bc361e8f466bd141abde785046a22f56ca6a46c5"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.012922 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" event={"ID":"63c64b29-5b57-481f-b4b2-c92498738c8a","Type":"ContainerStarted","Data":"4129e8d7b8fa53fb1311d01ba26415d7e1a4452a0e370ac95a0e5b62fd20d3ba"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.014663 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" event={"ID":"93b6c2c1-696e-458b-91b4-61469e7b4571","Type":"ContainerStarted","Data":"64374024e76cfeaf1ec6ef1eb59fbc6bf62bb60c000e6c7e662da6941c0d511a"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.014692 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" event={"ID":"93b6c2c1-696e-458b-91b4-61469e7b4571","Type":"ContainerStarted","Data":"8dd9e7317f0bee4e179c8c579e079f3325c4bad9a77e9dad665ace43dd578f80"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.014880 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.019844 4662 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-x57s5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused" start-of-body=
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.019898 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" podUID="93b6c2c1-696e-458b-91b4-61469e7b4571" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": dial tcp 10.217.0.38:5443: connect: connection refused"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.030487 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" event={"ID":"673ff0b2-abac-4934-bd97-39d8331cd3bb","Type":"ContainerStarted","Data":"226b07a786df45d7d18c9313c2c6149360fad8871f30d33fb7fa969392a41872"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.030549 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" event={"ID":"673ff0b2-abac-4934-bd97-39d8331cd3bb","Type":"ContainerStarted","Data":"0030a7e060a377cd9708ae0706cd0205544321500841c394588036aa684d2bf5"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.032648 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" event={"ID":"2c0fec9c-e9eb-4582-ae30-f34569a04270","Type":"ContainerStarted","Data":"66f0e1df51cfe8f66f2c814c9d29b03002f4b115cabfbe4e9ba78636dcdd461e"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.032686 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" event={"ID":"2c0fec9c-e9eb-4582-ae30-f34569a04270","Type":"ContainerStarted","Data":"b04dda889cb03c404cefc58a34b3a64ff0d1465c0b2b97b80b2ed72261b4589a"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.034435 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.035388 4662 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-5ttbx container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused" start-of-body=
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.035434 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" podUID="2c0fec9c-e9eb-4582-ae30-f34569a04270" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.41:8443/healthz\": dial tcp 10.217.0.41:8443: connect: connection refused"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.059074 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr" event={"ID":"f1264999-01bb-41ef-b2ae-9c0d7e1f4f15","Type":"ContainerStarted","Data":"59388fdbbb7986af4c07ec5c4da39308e1b1cb7cf65879381158f4d292a17291"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.059120 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr" event={"ID":"f1264999-01bb-41ef-b2ae-9c0d7e1f4f15","Type":"ContainerStarted","Data":"c3ea7b65362df7f88748d7771d212bc321d6a185c75afe3bf4b8442ee3e1e0f7"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.068471 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" event={"ID":"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4","Type":"ContainerStarted","Data":"5c27115b0b91cf1ef91e1612cd785d3b091383da562c506aa466ff1dd1b6cf30"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.068789 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" event={"ID":"64de8c2c-e0f0-430f-9687-f8bfe9e9e1b4","Type":"ContainerStarted","Data":"1d7c94cf219f5c5552c52550c618a45c4fc96cca0beca17c0e27b67c58762338"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.071319 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" event={"ID":"adea0714-95c0-4a54-bdef-2e645836fcc0","Type":"ContainerStarted","Data":"29430ecef39e3374adabb6087c6a7e9221781ac40d97d37267430ad8f6d68b98"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.071439 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" event={"ID":"adea0714-95c0-4a54-bdef-2e645836fcc0","Type":"ContainerStarted","Data":"d48a7d2e5c841abadffd9872a067c9a5b99ad7af9cb2306ac49729d9296ccd79"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.088457 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" podStartSLOduration=136.088439321 podStartE2EDuration="2m16.088439321s" podCreationTimestamp="2025-12-08 09:14:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.061728225 +0000 UTC m=+153.630756215" watchObservedRunningTime="2025-12-08 09:17:10.088439321 +0000 UTC m=+153.657467311"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.094107 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" event={"ID":"ecab2533-ce9e-4471-8a5e-749235846f79","Type":"ContainerStarted","Data":"65fb251a25725acd5de51f602f70e359b924d0f176791b873f9b0197b7aabacd"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.094359 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" event={"ID":"ecab2533-ce9e-4471-8a5e-749235846f79","Type":"ContainerStarted","Data":"a4abce6b13312ef3c885ff09287af0992244ac8c1b797c98fbee181b8656e68c"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.094852 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.095008 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.594987909 +0000 UTC m=+154.164015899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.095288 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.097146 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.597134017 +0000 UTC m=+154.166162107 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.110708 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" event={"ID":"bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88","Type":"ContainerStarted","Data":"425d0532719847b73665fce98ce0e6bb0cdb4ff6ff48f949cb82a6fd9e8b630d"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.110780 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" event={"ID":"bfa9f1b7-d7ab-4f0d-9988-4e929cfc0c88","Type":"ContainerStarted","Data":"76714ff7ea992cc1842b8039005200a4560d7c5b4117be79bdeba4bd256f4086"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.116052 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" event={"ID":"24432dab-77eb-4239-a3d1-a5ea2b818093","Type":"ContainerStarted","Data":"fffb53bb6023b3c319b5a311916c228a6ae492472e94f65d88bcc61e9faa2815"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.116091 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" event={"ID":"24432dab-77eb-4239-a3d1-a5ea2b818093","Type":"ContainerStarted","Data":"7d3d60bff851f86c94db6090ffb505a43b640f1cd9336baa8e1d96d1032bc535"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.116845 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.117990 4662 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-x52sk container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused" start-of-body=
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.118034 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" podUID="24432dab-77eb-4239-a3d1-a5ea2b818093" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.28:8443/healthz\": dial tcp 10.217.0.28:8443: connect: connection refused"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.124898 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" podStartSLOduration=135.124885751 podStartE2EDuration="2m15.124885751s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.124187502 +0000 UTC m=+153.693215492" watchObservedRunningTime="2025-12-08 09:17:10.124885751 +0000 UTC m=+153.693913741"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.126287 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-z5brr" podStartSLOduration=135.126281819 podStartE2EDuration="2m15.126281819s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.094480335 +0000 UTC m=+153.663508325" watchObservedRunningTime="2025-12-08 09:17:10.126281819 +0000 UTC m=+153.695309799"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.130549 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" event={"ID":"a49a2742-5f89-4a17-a477-dffb8db27f9c","Type":"ContainerStarted","Data":"19eab07c52b62e4f105b59cdb1761238b3f2a48c0247136f75d7e73c7b210e38"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.138055 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" event={"ID":"6369a891-1146-441f-88c0-791540d2651d","Type":"ContainerStarted","Data":"014e043315b0d7610e01d6a14ed264784cc420d1569f8cfe15c295bc1937f577"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.146784 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" event={"ID":"c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70","Type":"ContainerStarted","Data":"f35306c9c83638e16d593bf4d96a190d0512fd84df05f3fe77b4b755e254a508"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.154242 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" podStartSLOduration=135.154229898 podStartE2EDuration="2m15.154229898s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.153163159 +0000 UTC m=+153.722191149" watchObservedRunningTime="2025-12-08 09:17:10.154229898 +0000 UTC m=+153.723257888"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.166582 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2gcvz" event={"ID":"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7","Type":"ContainerStarted","Data":"9449ad2b92e1bb3b44c646704279045e87096e0a2e66a1c14e59ef0487a36bc0"}
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.166636 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-2gcvz"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.168958 4662 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-z6pb2 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused" start-of-body=
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.168992 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" podUID="20d06f62-0413-4b18-9e01-05932b0a663b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.35:8080/healthz\": dial tcp 10.217.0.35:8080: connect: connection refused"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.192193 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-djbrv" podStartSLOduration=135.192180569 podStartE2EDuration="2m15.192180569s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.191332646 +0000 UTC m=+153.760360626" watchObservedRunningTime="2025-12-08 09:17:10.192180569 +0000 UTC m=+153.761208559"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.196194 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.197368 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.69734791 +0000 UTC m=+154.266375950 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.214410 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-j8tsq" podStartSLOduration=135.214392303 podStartE2EDuration="2m15.214392303s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.213152279 +0000 UTC m=+153.782180269" watchObservedRunningTime="2025-12-08 09:17:10.214392303 +0000 UTC m=+153.783420293"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.247477 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-9l4m9" podStartSLOduration=135.24745916 podStartE2EDuration="2m15.24745916s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.245227029 +0000 UTC m=+153.814255019" watchObservedRunningTime="2025-12-08 09:17:10.24745916 +0000 UTC m=+153.816487150"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.272518 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-dgxkn" podStartSLOduration=135.27249958 podStartE2EDuration="2m15.27249958s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.270140076 +0000 UTC m=+153.839168066" watchObservedRunningTime="2025-12-08 09:17:10.27249958 +0000 UTC m=+153.841527580"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.292749 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-8gbbv" podStartSLOduration=135.292718769 podStartE2EDuration="2m15.292718769s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.292654527 +0000 UTC m=+153.861682517" watchObservedRunningTime="2025-12-08 09:17:10.292718769 +0000 UTC m=+153.861746749"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.299060 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.300045 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.800030718 +0000 UTC m=+154.369058708 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.332417 4662 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-z55dl container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.332485 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" podUID="f9723a58-a575-47f6-9c85-fa7a6fc65158" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.381892 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" podStartSLOduration=130.381871891 podStartE2EDuration="2m10.381871891s" podCreationTimestamp="2025-12-08 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.358003263 +0000 UTC m=+153.927031253" watchObservedRunningTime="2025-12-08 09:17:10.381871891 +0000 UTC m=+153.950899881"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.400094 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.400426 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:10.900406655 +0000 UTC m=+154.469434635 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.502084 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.502465 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.002453737 +0000 UTC m=+154.571481727 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.513044 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-2gcvz" podStartSLOduration=9.513026444 podStartE2EDuration="9.513026444s" podCreationTimestamp="2025-12-08 09:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.453300161 +0000 UTC m=+154.022328151" watchObservedRunningTime="2025-12-08 09:17:10.513026444 +0000 UTC m=+154.082054434"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.513403 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" podStartSLOduration=135.513398944 podStartE2EDuration="2m15.513398944s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.509857638 +0000 UTC m=+154.078885628" watchObservedRunningTime="2025-12-08 09:17:10.513398944 +0000 UTC m=+154.082426934"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.603010 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.603597 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.103582524 +0000 UTC m=+154.672610514 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.621331 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-6r9cr" podStartSLOduration=135.621314355 podStartE2EDuration="2m15.621314355s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.620096602 +0000 UTC m=+154.189124592" watchObservedRunningTime="2025-12-08 09:17:10.621314355 +0000 UTC m=+154.190342345"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.621623 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-54qf6" podStartSLOduration=135.621617874 podStartE2EDuration="2m15.621617874s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:10.548913909 +0000 UTC m=+154.117941899" watchObservedRunningTime="2025-12-08 09:17:10.621617874 +0000 UTC m=+154.190645864"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.671220 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 09:17:10 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld
Dec 08 09:17:10 crc kubenswrapper[4662]: [+]process-running ok
Dec 08 09:17:10 crc kubenswrapper[4662]: healthz check failed
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.671268 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.705201 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.705643 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.205611455 +0000 UTC m=+154.774639445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.806332 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.806528 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.306503236 +0000 UTC m=+154.875531226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.806655 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.807001 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.306992169 +0000 UTC m=+154.876020239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.907370 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.907555 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.40753022 +0000 UTC m=+154.976558210 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:10 crc kubenswrapper[4662]: I1208 09:17:10.907657 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:10 crc kubenswrapper[4662]: E1208 09:17:10.907914 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.40790379 +0000 UTC m=+154.976931780 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.008763 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.008951 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.508918124 +0000 UTC m=+155.077946114 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.009003 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.009293 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.509282444 +0000 UTC m=+155.078310434 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.110166 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.110325 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.610298858 +0000 UTC m=+155.179326848 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.110408 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.110770 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.610762421 +0000 UTC m=+155.179790411 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.171571 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" event={"ID":"63c64b29-5b57-481f-b4b2-c92498738c8a","Type":"ContainerStarted","Data":"b3cd4c8a47fe726d8fc2472968f7f227d692869ec0e9d7f58b3154e4747dec2f"} Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.177082 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-2gcvz" event={"ID":"6ae6a54c-1461-4e88-9ccb-7b78ee6876e7","Type":"ContainerStarted","Data":"a42a426b711c88ea5cbba05dac0bc0f2aa3e2eb8075ced1efea2fb255bf1c2cb"} Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.211844 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.212019 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.711991541 +0000 UTC m=+155.281019531 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.212150 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.212482 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.712470024 +0000 UTC m=+155.281498014 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.239841 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-5ttbx" Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.243266 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-x52sk" Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.313683 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.313792 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.813774746 +0000 UTC m=+155.382802736 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.314177 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.316063 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.816049387 +0000 UTC m=+155.385077377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.393564 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z55dl" Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.416170 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.416503 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.916465935 +0000 UTC m=+155.485493955 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.416572 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.417044 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:11.917035881 +0000 UTC m=+155.486063871 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.517623 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.518685 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.018669502 +0000 UTC m=+155.587697492 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.619150 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.619464 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.119451369 +0000 UTC m=+155.688479359 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.675264 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:11 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:11 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:11 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.675598 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.719929 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.720086 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.220067652 +0000 UTC m=+155.789095642 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.720200 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.720539 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.220529835 +0000 UTC m=+155.789557825 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.821281 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.821655 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.321629151 +0000 UTC m=+155.890657141 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:11 crc kubenswrapper[4662]: I1208 09:17:11.922443 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:11 crc kubenswrapper[4662]: E1208 09:17:11.922836 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.42281904 +0000 UTC m=+155.991847080 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.023302 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.023463 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.523439243 +0000 UTC m=+156.092467233 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.023616 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.023908 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.523896436 +0000 UTC m=+156.092924426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.124770 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.124966 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.624941571 +0000 UTC m=+156.193969581 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.125349 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.125685 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.62566974 +0000 UTC m=+156.194697720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.178528 4662 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-x57s5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.178588 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" podUID="93b6c2c1-696e-458b-91b4-61469e7b4571" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.38:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.189631 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" event={"ID":"63c64b29-5b57-481f-b4b2-c92498738c8a","Type":"ContainerStarted","Data":"1b210e5eaa84eb0b29e2d70621508d3904db1256e56cd920589e63adf6fd32e0"} Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.226562 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.226695 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.726670544 +0000 UTC m=+156.295698534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.226874 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.227280 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.72726474 +0000 UTC m=+156.296292730 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.306849 4662 patch_prober.go:28] interesting pod/downloads-7954f5f757-xntqx container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.306912 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-xntqx" podUID="7109ee21-7989-491c-8847-edacebb08704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.307507 4662 patch_prober.go:28] interesting pod/downloads-7954f5f757-xntqx container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.307570 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-xntqx" podUID="7109ee21-7989-491c-8847-edacebb08704" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.327449 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.328413 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.828357356 +0000 UTC m=+156.397385346 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.429407 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.429844 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:12.929825933 +0000 UTC m=+156.498853983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.444883 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4vsqx"] Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.446038 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.452466 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.467920 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4vsqx"] Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.530814 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.531130 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnnzk\" (UniqueName: \"kubernetes.io/projected/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-kube-api-access-wnnzk\") pod \"community-operators-4vsqx\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.531184 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-utilities\") pod \"community-operators-4vsqx\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.531201 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-catalog-content\") pod \"community-operators-4vsqx\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.531306 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.031292199 +0000 UTC m=+156.600320179 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.623062 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.623116 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.632602 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5kv9l"] Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.632767 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.632834 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-utilities\") pod \"community-operators-4vsqx\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.632868 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-catalog-content\") pod \"community-operators-4vsqx\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.633193 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.133172506 +0000 UTC m=+156.702200526 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.633493 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.633575 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-utilities\") pod \"community-operators-4vsqx\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.633950 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnnzk\" (UniqueName: \"kubernetes.io/projected/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-kube-api-access-wnnzk\") pod \"community-operators-4vsqx\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.633960 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-catalog-content\") pod \"community-operators-4vsqx\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.635981 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.645815 4662 patch_prober.go:28] interesting pod/apiserver-76f77b778f-6fclc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]log ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]etcd ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]poststarthook/start-apiserver-admission-initializer ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]poststarthook/generic-apiserver-start-informers ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]poststarthook/max-in-flight-filter ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]poststarthook/storage-object-count-tracker-hook ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]poststarthook/image.openshift.io-apiserver-caches ok Dec 08 09:17:12 crc kubenswrapper[4662]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Dec 08 09:17:12 crc kubenswrapper[4662]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Dec 08 09:17:12 crc kubenswrapper[4662]: [+]poststarthook/project.openshift.io-projectcache ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]poststarthook/openshift.io-startinformers ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]poststarthook/openshift.io-restmapperupdater ok Dec 08 09:17:12 crc kubenswrapper[4662]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Dec 08 09:17:12 crc kubenswrapper[4662]: livez check failed Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.646358 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" podUID="f642958a-3ea0-4b41-81d8-6271c6403194" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.655027 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kv9l"] Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.682137 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:12 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:12 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:12 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.682534 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.684641 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnnzk\" (UniqueName: \"kubernetes.io/projected/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-kube-api-access-wnnzk\") pod \"community-operators-4vsqx\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.735407 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.735622 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.235589238 +0000 UTC m=+156.804617238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.735927 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rggd6\" (UniqueName: \"kubernetes.io/projected/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-kube-api-access-rggd6\") pod \"certified-operators-5kv9l\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.736133 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-utilities\") pod \"certified-operators-5kv9l\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.736934 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-catalog-content\") pod \"certified-operators-5kv9l\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.737091 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.737572 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.237558502 +0000 UTC m=+156.806586552 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.757917 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.836428 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-64xq9"] Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.837461 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.838200 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.838348 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rggd6\" (UniqueName: \"kubernetes.io/projected/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-kube-api-access-rggd6\") pod \"certified-operators-5kv9l\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.838377 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-utilities\") pod \"certified-operators-5kv9l\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.838407 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-catalog-content\") pod \"certified-operators-5kv9l\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.838890 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-catalog-content\") pod \"certified-operators-5kv9l\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.838940 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.338916235 +0000 UTC m=+156.907944225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.839100 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-utilities\") pod \"certified-operators-5kv9l\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.849183 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-64xq9"] Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.867221 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rggd6\" (UniqueName: \"kubernetes.io/projected/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-kube-api-access-rggd6\") pod \"certified-operators-5kv9l\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.939444 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-catalog-content\") pod \"community-operators-64xq9\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.939786 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-utilities\") pod \"community-operators-64xq9\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.939900 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrkc4\" (UniqueName: \"kubernetes.io/projected/d746d881-44be-4a9e-8b0f-f619328d1610-kube-api-access-rrkc4\") pod \"community-operators-64xq9\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.940003 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:12 crc kubenswrapper[4662]: E1208 09:17:12.940325 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.44030862 +0000 UTC m=+157.009336610 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:12 crc kubenswrapper[4662]: I1208 09:17:12.946092 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.041413 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:13 crc kubenswrapper[4662]: E1208 09:17:13.041546 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.541521989 +0000 UTC m=+157.110549979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.041813 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-utilities\") pod \"community-operators-64xq9\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.041837 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrkc4\" (UniqueName: \"kubernetes.io/projected/d746d881-44be-4a9e-8b0f-f619328d1610-kube-api-access-rrkc4\") pod \"community-operators-64xq9\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.041865 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.041937 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-catalog-content\") pod \"community-operators-64xq9\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.042262 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-utilities\") pod \"community-operators-64xq9\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.042320 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-catalog-content\") pod \"community-operators-64xq9\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:13 crc kubenswrapper[4662]: E1208 09:17:13.042549 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.542538637 +0000 UTC m=+157.111566627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.050026 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-557kf"] Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.050937 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.072179 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-557kf"] Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.078087 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrkc4\" (UniqueName: \"kubernetes.io/projected/d746d881-44be-4a9e-8b0f-f619328d1610-kube-api-access-rrkc4\") pod \"community-operators-64xq9\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.142728 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.142908 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-catalog-content\") pod \"certified-operators-557kf\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.142934 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-utilities\") pod \"certified-operators-557kf\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: E1208 09:17:13.143042 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.643026346 +0000 UTC m=+157.212054336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.143097 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5wf\" (UniqueName: \"kubernetes.io/projected/c761fbd0-5303-4e6c-bd2c-509135843e80-kube-api-access-nf5wf\") pod \"certified-operators-557kf\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.156169 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.182216 4662 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.244078 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nf5wf\" (UniqueName: \"kubernetes.io/projected/c761fbd0-5303-4e6c-bd2c-509135843e80-kube-api-access-nf5wf\") pod \"certified-operators-557kf\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.244124 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-catalog-content\") pod \"certified-operators-557kf\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.244144 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-utilities\") pod \"certified-operators-557kf\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.244171 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:13 crc kubenswrapper[4662]: E1208 09:17:13.244433 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.744421991 +0000 UTC m=+157.313449981 (durationBeforeRetry 500ms). 
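
Note: every MountDevice/TearDown failure above has the same root cause: the plugin watcher has only just picked up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock, so the kubelet's in-memory CSI driver registry does not yet contain the driver name. A minimal Go sketch of that lookup pattern, with illustrative types rather than kubelet's actual ones:

    package main

    import (
        "fmt"
        "sync"
    )

    // driverRegistry mimics the kubelet-side map of registered CSI plugins.
    // Names are added only after the plugin watcher validates the driver's
    // registration socket, so lookups fail during the startup window.
    type driverRegistry struct {
        mu      sync.RWMutex
        drivers map[string]string // driver name -> node socket endpoint
    }

    func (r *driverRegistry) get(name string) (string, error) {
        r.mu.RLock()
        defer r.mu.RUnlock()
        ep, ok := r.drivers[name]
        if !ok {
            // The same condition the log reports for every mount/unmount
            // attempt until registration completes.
            return "", fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
        }
        return ep, nil
    }

    func (r *driverRegistry) register(name, endpoint string) {
        r.mu.Lock()
        defer r.mu.Unlock()
        r.drivers[name] = endpoint
    }

    func main() {
        reg := &driverRegistry{drivers: map[string]string{}}
        if _, err := reg.get("kubevirt.io.hostpath-provisioner"); err != nil {
            fmt.Println("before registration:", err)
        }
        reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/csi-hostpath/csi.sock")
        ep, _ := reg.get("kubevirt.io.hostpath-provisioner")
        fmt.Println("after registration:", ep)
    }
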
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.245535 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-catalog-content\") pod \"certified-operators-557kf\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.247433 4662 generic.go:334] "Generic (PLEG): container finished" podID="a49a2742-5f89-4a17-a477-dffb8db27f9c" containerID="19eab07c52b62e4f105b59cdb1761238b3f2a48c0247136f75d7e73c7b210e38" exitCode=0 Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.247517 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" event={"ID":"a49a2742-5f89-4a17-a477-dffb8db27f9c","Type":"ContainerDied","Data":"19eab07c52b62e4f105b59cdb1761238b3f2a48c0247136f75d7e73c7b210e38"} Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.250637 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-utilities\") pod \"certified-operators-557kf\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.287154 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" event={"ID":"63c64b29-5b57-481f-b4b2-c92498738c8a","Type":"ContainerStarted","Data":"af693f47e9be0791d046a819e71985e563761b9be90a5c5102af37ecd03c606e"} Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.287417 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" event={"ID":"63c64b29-5b57-481f-b4b2-c92498738c8a","Type":"ContainerStarted","Data":"11b934cc4f54a82d74da14cc8a5d724f0214cf7f4528fccba800e36f9acad9b0"} Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.290977 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nf5wf\" (UniqueName: \"kubernetes.io/projected/c761fbd0-5303-4e6c-bd2c-509135843e80-kube-api-access-nf5wf\") pod \"certified-operators-557kf\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.353498 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:13 crc kubenswrapper[4662]: E1208 09:17:13.354517 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-12-08 09:17:13.854496491 +0000 UTC m=+157.423524481 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.364363 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.399403 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" podStartSLOduration=12.39938124 podStartE2EDuration="12.39938124s" podCreationTimestamp="2025-12-08 09:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:13.396200314 +0000 UTC m=+156.965228314" watchObservedRunningTime="2025-12-08 09:17:13.39938124 +0000 UTC m=+156.968409230" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.455530 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:13 crc kubenswrapper[4662]: E1208 09:17:13.455868 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:13.955857464 +0000 UTC m=+157.524885454 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.512358 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4vsqx"] Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.557125 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:13 crc kubenswrapper[4662]: E1208 09:17:13.557277 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
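
Note: each failed volume operation is parked with "No retries permitted until ..." roughly 500ms in the future. Kubelet applies per-volume exponential backoff; the sketch below assumes the logged 500ms initial delay, with the doubling factor and cap being illustrative assumptions rather than the exact upstream constants:

    package main

    import (
        "fmt"
        "time"
    )

    // backoff tracks when a failed volume operation may be retried,
    // mirroring the "durationBeforeRetry 500ms" bookkeeping in the log.
    type backoff struct {
        delay     time.Duration
        notBefore time.Time
    }

    func (b *backoff) fail(now time.Time) {
        if b.delay == 0 {
            b.delay = 500 * time.Millisecond // initial delay seen in the log
        } else if b.delay < 2*time.Minute { // cap is an assumption
            b.delay *= 2
        }
        b.notBefore = now.Add(b.delay)
    }

    func (b *backoff) allowed(now time.Time) bool { return !now.Before(b.notBefore) }

    func main() {
        var b backoff
        now := time.Now()
        for i := 0; i < 4; i++ {
            b.fail(now)
            fmt.Printf("attempt %d failed; no retries permitted until %s (delay %s)\n",
                i+1, b.notBefore.Format(time.RFC3339Nano), b.delay)
            now = b.notBefore // retry as soon as permitted
        }
    }
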
No retries permitted until 2025-12-08 09:17:14.057254009 +0000 UTC m=+157.626281999 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.557396 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:13 crc kubenswrapper[4662]: E1208 09:17:13.557674 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:14.05766298 +0000 UTC m=+157.626690960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:13 crc kubenswrapper[4662]: W1208 09:17:13.558081 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod51f862c6_f9e4_4c88_b0fa_5b4e5acb7856.slice/crio-662644d5941a697b26e6ca5ea2a3b557c5a6569cec8d6042893255afdb1d9045 WatchSource:0}: Error finding container 662644d5941a697b26e6ca5ea2a3b557c5a6569cec8d6042893255afdb1d9045: Status 404 returned error can't find the container with id 662644d5941a697b26e6ca5ea2a3b557c5a6569cec8d6042893255afdb1d9045 Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.604604 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5kv9l"] Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.658454 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:13 crc kubenswrapper[4662]: E1208 09:17:13.659232 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-12-08 09:17:14.159217358 +0000 UTC m=+157.728245348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.668856 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.676485 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:13 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:13 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:13 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.676545 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.741826 4662 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-08T09:17:13.182241172Z","Handler":null,"Name":""} Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.749391 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-64xq9"] Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.760676 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:13 crc kubenswrapper[4662]: E1208 09:17:13.762206 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-12-08 09:17:14.262193356 +0000 UTC m=+157.831221346 (durationBeforeRetry 500ms). 
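
Note: the router probe output above is the standard Kubernetes healthz report: one [+]/[-] line per named check, reasons withheld from unauthorized callers, and HTTP 500 when any check fails. A self-contained sketch of a handler producing that shape (simplified; the real router uses the k8s.io/apiserver healthz machinery):

    package main

    import (
        "fmt"
        "net/http"
    )

    type check struct {
        name string
        fn   func() error
    }

    // healthz writes the [+]/[-] per-check report seen in the probe output.
    // Failure reasons are withheld, as in the log ("reason withheld").
    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            failed := false
            body := ""
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                w.WriteHeader(http.StatusInternalServerError) // probe sees "statuscode: 500"
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        checks := []check{
            {"backend-http", func() error { return fmt.Errorf("not ready") }},
            {"has-synced", func() error { return fmt.Errorf("not synced") }},
            {"process-running", func() error { return nil }},
        }
        http.Handle("/healthz", healthz(checks))
        _ = http.ListenAndServe(":1936", nil) // port is an assumption
    }
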
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4nw69" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.764953 4662 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.764991 4662 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.838211 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.838247 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.839945 4662 patch_prober.go:28] interesting pod/console-f9d7485db-9lp67 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.840014 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9lp67" podUID="a0cf72db-464e-4859-bec6-0e3d456e10aa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.860055 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.863490 4662 util.go:30] "No sandbox for pod can be found. 
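
Note: the console startup probe is an HTTPS GET against 10.217.0.22:8443/health; "connection refused" here just means the container has not opened the port yet, unlike the router's 500. A sketch of that probe's shape in the core/v1 Go API (period and failure threshold are assumptions, and the embedded ProbeHandler field assumes a recent k8s.io/api):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // startupProbe sketches the probe behind the console failures above.
    // While it fails, readiness stays "" (unknown) and liveness is not yet
    // evaluated; the pod is restarted only after FailureThreshold misses.
    func startupProbe() *corev1.Probe {
        return &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                HTTPGet: &corev1.HTTPGetAction{
                    Path:   "/health",
                    Port:   intstr.FromInt(8443),
                    Scheme: corev1.URISchemeHTTPS,
                },
            },
            PeriodSeconds:    10, // assumption
            FailureThreshold: 30, // assumption
        }
    }

    func main() { fmt.Println(startupProbe().HTTPGet.Path) }
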
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.864848 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.871937 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.873580 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.873807 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.880020 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.885665 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.966913 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f3d3b932-d519-4458-bd2f-4d747a16a5b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.966966 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f3d3b932-d519-4458-bd2f-4d747a16a5b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.967008 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.992774 4662 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
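
Note: registration has now completed (csi_plugin.go validated the driver at /var/lib/kubelet/plugins/csi-hostpath/csi.sock), and the attacher's first act is a capability check: the hostpath driver does not advertise STAGE_UNSTAGE_VOLUME, so NodeStageVolume is skipped and MountDevice is recorded as succeeded without any staging work. A sketch of that gate, using local stand-in types rather than the CSI spec structs:

    package main

    import "fmt"

    type nodeCapability int

    const capStageUnstageVolume nodeCapability = iota

    // mountDevice mirrors attacher.MountDevice's capability gate: without
    // STAGE_UNSTAGE_VOLUME the staging call is skipped entirely and the
    // operation is recorded as succeeded.
    func mountDevice(caps map[nodeCapability]bool, stagingPath string) error {
        if !caps[capStageUnstageVolume] {
            fmt.Println("STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...")
            return nil
        }
        fmt.Println("NodeStageVolume ->", stagingPath)
        return nil // a staging driver would mount the device here
    }

    func main() {
        hostpathCaps := map[nodeCapability]bool{} // hostpath driver: no staging
        _ = mountDevice(hostpathCaps,
            "/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/.../globalmount")
    }
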
Dec 08 09:17:13 crc kubenswrapper[4662]: I1208 09:17:13.992812 4662 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.067716 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f3d3b932-d519-4458-bd2f-4d747a16a5b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.067841 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"f3d3b932-d519-4458-bd2f-4d747a16a5b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.067881 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f3d3b932-d519-4458-bd2f-4d747a16a5b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.083151 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-557kf"] Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.089886 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"f3d3b932-d519-4458-bd2f-4d747a16a5b7\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 08 09:17:14 crc kubenswrapper[4662]: W1208 09:17:14.098211 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc761fbd0_5303_4e6c_bd2c_509135843e80.slice/crio-7f6525c761d0674b1224c6e7c6a1c5facd0eeaee43866944848519b4399e2894 WatchSource:0}: Error finding container 7f6525c761d0674b1224c6e7c6a1c5facd0eeaee43866944848519b4399e2894: Status 404 returned error can't find the container with id 7f6525c761d0674b1224c6e7c6a1c5facd0eeaee43866944848519b4399e2894 Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.133317 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4nw69\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.203710 4662 util.go:30] "No sandbox for pod can be found. 
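
Note: the two entries for pvc-657094db show the two-phase CSI mount: MountDevice targets a per-volume globalmount directory, then MountVolume.SetUp publishes (typically bind-mounts) it into the pod's own volumes directory. Path construction following the layout in the log and the usual kubelet convention:

    package main

    import (
        "fmt"
        "path/filepath"
    )

    // globalMountPath is the per-volume staging target; the hash-style
    // directory name is derived from the volume spec.
    func globalMountPath(driver, specHash string) string {
        return filepath.Join("/var/lib/kubelet/plugins/kubernetes.io/csi",
            driver, specHash, "globalmount")
    }

    // podPublishPath is where NodePublishVolume exposes the volume to one pod.
    func podPublishPath(podUID, volName string) string {
        return filepath.Join("/var/lib/kubelet/pods", podUID,
            "volumes/kubernetes.io~csi", volName, "mount")
    }

    func main() {
        fmt.Println(globalMountPath("kubevirt.io.hostpath-provisioner",
            "1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983"))
        fmt.Println(podPublishPath("87f08450-5929-4441-88f4-fbaec18e0f73",
            "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"))
    }
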
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.297845 4662 generic.go:334] "Generic (PLEG): container finished" podID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerID="14bc472d321038eaec1641fca0d1eabcd859a8cf7d5d911990e7a73a1544a8ba" exitCode=0 Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.297915 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vsqx" event={"ID":"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856","Type":"ContainerDied","Data":"14bc472d321038eaec1641fca0d1eabcd859a8cf7d5d911990e7a73a1544a8ba"} Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.297943 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vsqx" event={"ID":"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856","Type":"ContainerStarted","Data":"662644d5941a697b26e6ca5ea2a3b557c5a6569cec8d6042893255afdb1d9045"} Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.302520 4662 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.304125 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-557kf" event={"ID":"c761fbd0-5303-4e6c-bd2c-509135843e80","Type":"ContainerStarted","Data":"7f6525c761d0674b1224c6e7c6a1c5facd0eeaee43866944848519b4399e2894"} Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.312192 4662 generic.go:334] "Generic (PLEG): container finished" podID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerID="e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8" exitCode=0 Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.312825 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kv9l" event={"ID":"fd5d97ce-9155-4ca4-856a-99db2e1b46a6","Type":"ContainerDied","Data":"e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8"} Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.312881 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kv9l" event={"ID":"fd5d97ce-9155-4ca4-856a-99db2e1b46a6","Type":"ContainerStarted","Data":"a504a2642af0db023a2286fbeded2f9ebb736c0c2293b42ed2fa3b2173f916b8"} Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.314890 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-x57s5" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.320226 4662 util.go:30] "No sandbox for pod can be found. 
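
Note: the "Generic (PLEG): container finished" lines come from the pod lifecycle event generator, which periodically relists containers and turns state transitions into the ContainerStarted/ContainerDied events consumed by the sync loop. A minimal diff sketch with simplified states:

    package main

    import "fmt"

    type containerState string

    const (
        stateRunning containerState = "running"
        stateExited  containerState = "exited"
    )

    type event struct{ id, kind string }

    // relistDiff mirrors PLEG: compare the previous and current snapshots of
    // container states and emit ContainerStarted/ContainerDied events.
    func relistDiff(old, cur map[string]containerState) []event {
        var evs []event
        for id, s := range cur {
            prev, seen := old[id]
            switch {
            case !seen && s == stateRunning:
                evs = append(evs, event{id, "ContainerStarted"})
            case seen && prev == stateRunning && s == stateExited:
                evs = append(evs, event{id, "ContainerDied"})
            }
        }
        return evs
    }

    func main() {
        old := map[string]containerState{"19eab07c": stateRunning}
        cur := map[string]containerState{"19eab07c": stateExited, "af693f47": stateRunning}
        for _, e := range relistDiff(old, cur) {
            fmt.Printf("SyncLoop (PLEG): %s %s\n", e.kind, e.id)
        }
    }
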
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.353331 4662 generic.go:334] "Generic (PLEG): container finished" podID="d746d881-44be-4a9e-8b0f-f619328d1610" containerID="091e513535c6ed9004759f79e11f1f56fe2db4f29bec27a1e6b39a88771bb93e" exitCode=0 Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.354288 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64xq9" event={"ID":"d746d881-44be-4a9e-8b0f-f619328d1610","Type":"ContainerDied","Data":"091e513535c6ed9004759f79e11f1f56fe2db4f29bec27a1e6b39a88771bb93e"} Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.354314 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64xq9" event={"ID":"d746d881-44be-4a9e-8b0f-f619328d1610","Type":"ContainerStarted","Data":"ef1f678754651bcbb6d77077763fa304b4c2818d3ac7699a4ccd39c3e651713f"} Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.450376 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jzx85"] Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.451683 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.454382 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.457227 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzx85"] Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.477417 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-catalog-content\") pod \"redhat-marketplace-jzx85\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.477494 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k62l\" (UniqueName: \"kubernetes.io/projected/286d64f0-03e6-439e-80d6-be053d25d93b-kube-api-access-6k62l\") pod \"redhat-marketplace-jzx85\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.477542 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-utilities\") pod \"redhat-marketplace-jzx85\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.567229 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.578284 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-utilities\") pod \"redhat-marketplace-jzx85\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc 
kubenswrapper[4662]: I1208 09:17:14.578339 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-catalog-content\") pod \"redhat-marketplace-jzx85\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.578387 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6k62l\" (UniqueName: \"kubernetes.io/projected/286d64f0-03e6-439e-80d6-be053d25d93b-kube-api-access-6k62l\") pod \"redhat-marketplace-jzx85\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.579136 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-utilities\") pod \"redhat-marketplace-jzx85\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.579330 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-catalog-content\") pod \"redhat-marketplace-jzx85\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.620904 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6k62l\" (UniqueName: \"kubernetes.io/projected/286d64f0-03e6-439e-80d6-be053d25d93b-kube-api-access-6k62l\") pod \"redhat-marketplace-jzx85\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.679229 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:14 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:14 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:14 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.679551 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.709825 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.724967 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.781847 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a49a2742-5f89-4a17-a477-dffb8db27f9c-config-volume\") pod \"a49a2742-5f89-4a17-a477-dffb8db27f9c\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.781944 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j84xm\" (UniqueName: \"kubernetes.io/projected/a49a2742-5f89-4a17-a477-dffb8db27f9c-kube-api-access-j84xm\") pod \"a49a2742-5f89-4a17-a477-dffb8db27f9c\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.781973 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a49a2742-5f89-4a17-a477-dffb8db27f9c-secret-volume\") pod \"a49a2742-5f89-4a17-a477-dffb8db27f9c\" (UID: \"a49a2742-5f89-4a17-a477-dffb8db27f9c\") " Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.785438 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.787211 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a49a2742-5f89-4a17-a477-dffb8db27f9c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a49a2742-5f89-4a17-a477-dffb8db27f9c" (UID: "a49a2742-5f89-4a17-a477-dffb8db27f9c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.796445 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a49a2742-5f89-4a17-a477-dffb8db27f9c-config-volume" (OuterVolumeSpecName: "config-volume") pod "a49a2742-5f89-4a17-a477-dffb8db27f9c" (UID: "a49a2742-5f89-4a17-a477-dffb8db27f9c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.796900 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49a2742-5f89-4a17-a477-dffb8db27f9c-kube-api-access-j84xm" (OuterVolumeSpecName: "kube-api-access-j84xm") pod "a49a2742-5f89-4a17-a477-dffb8db27f9c" (UID: "a49a2742-5f89-4a17-a477-dffb8db27f9c"). InnerVolumeSpecName "kube-api-access-j84xm". 
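
Note: teardown for the finished collect-profiles pod proceeds volume by volume: UnmountVolume.TearDown succeeds for the configmap, secret, and projected volumes, the reconciler then marks each detached (DevicePath ""), and only once nothing remains mounted can the pod's volumes directory be removed, as happened for pod 8f668bae above. An ordering sketch with illustrative names:

    package main

    import "fmt"

    // world tracks which volumes are still mounted for one terminated pod.
    type world struct {
        mounted map[string]bool
    }

    func (w *world) tearDown(vol string) {
        fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
        delete(w.mounted, vol)
        fmt.Printf("Volume detached for volume %q DevicePath \"\"\n", vol)
    }

    // cleanupPodDir refuses to remove the directory while anything is mounted,
    // which is why detach must complete first.
    func (w *world) cleanupPodDir(podUID string) error {
        if len(w.mounted) != 0 {
            return fmt.Errorf("pod %s still has mounted volumes", podUID)
        }
        fmt.Printf("Cleaned up orphaned pod volumes dir /var/lib/kubelet/pods/%s/volumes\n", podUID)
        return nil
    }

    func main() {
        w := &world{mounted: map[string]bool{
            "config-volume": true, "secret-volume": true, "kube-api-access-j84xm": true,
        }}
        for _, v := range []string{"config-volume", "secret-volume", "kube-api-access-j84xm"} {
            w.tearDown(v)
        }
        _ = w.cleanupPodDir("a49a2742-5f89-4a17-a477-dffb8db27f9c")
    }
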
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.837215 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mvfv4"] Dec 08 09:17:14 crc kubenswrapper[4662]: E1208 09:17:14.837485 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49a2742-5f89-4a17-a477-dffb8db27f9c" containerName="collect-profiles" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.837500 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49a2742-5f89-4a17-a477-dffb8db27f9c" containerName="collect-profiles" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.837635 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49a2742-5f89-4a17-a477-dffb8db27f9c" containerName="collect-profiles" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.838570 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.857753 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvfv4"] Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.883649 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-catalog-content\") pod \"redhat-marketplace-mvfv4\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.883814 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-utilities\") pod \"redhat-marketplace-mvfv4\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.883851 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjfd\" (UniqueName: \"kubernetes.io/projected/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-kube-api-access-cjjfd\") pod \"redhat-marketplace-mvfv4\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.883888 4662 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a49a2742-5f89-4a17-a477-dffb8db27f9c-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.883928 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j84xm\" (UniqueName: \"kubernetes.io/projected/a49a2742-5f89-4a17-a477-dffb8db27f9c-kube-api-access-j84xm\") on node \"crc\" DevicePath \"\"" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.883939 4662 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a49a2742-5f89-4a17-a477-dffb8db27f9c-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.977165 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nw69"] Dec 08 09:17:14 crc kubenswrapper[4662]: W1208 09:17:14.982872 4662 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87f08450_5929_4441_88f4_fbaec18e0f73.slice/crio-bc6b78d83f108170f6150810201863d6a2d05b1de1f35a695d00df014cef352e WatchSource:0}: Error finding container bc6b78d83f108170f6150810201863d6a2d05b1de1f35a695d00df014cef352e: Status 404 returned error can't find the container with id bc6b78d83f108170f6150810201863d6a2d05b1de1f35a695d00df014cef352e Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.984712 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-utilities\") pod \"redhat-marketplace-mvfv4\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.984757 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjjfd\" (UniqueName: \"kubernetes.io/projected/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-kube-api-access-cjjfd\") pod \"redhat-marketplace-mvfv4\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.984805 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-catalog-content\") pod \"redhat-marketplace-mvfv4\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.985452 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-utilities\") pod \"redhat-marketplace-mvfv4\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:14 crc kubenswrapper[4662]: I1208 09:17:14.985584 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-catalog-content\") pod \"redhat-marketplace-mvfv4\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.000789 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjjfd\" (UniqueName: \"kubernetes.io/projected/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-kube-api-access-cjjfd\") pod \"redhat-marketplace-mvfv4\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.031494 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzx85"] Dec 08 09:17:15 crc kubenswrapper[4662]: W1208 09:17:15.043530 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod286d64f0_03e6_439e_80d6_be053d25d93b.slice/crio-ba92682ba9f57d02733e253196b244eaf1fdf48b550157630d6ceb832b6e9027 WatchSource:0}: Error finding container ba92682ba9f57d02733e253196b244eaf1fdf48b550157630d6ceb832b6e9027: Status 404 returned error can't find the container with id ba92682ba9f57d02733e253196b244eaf1fdf48b550157630d6ceb832b6e9027 Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.160083 4662 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.365247 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvfv4"] Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.376481 4662 generic.go:334] "Generic (PLEG): container finished" podID="c761fbd0-5303-4e6c-bd2c-509135843e80" containerID="6aabb8ca8de6abf3b5d7ffe5a13d446fc4985a2c0390bc70f95ddb7412e88dab" exitCode=0 Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.376552 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-557kf" event={"ID":"c761fbd0-5303-4e6c-bd2c-509135843e80","Type":"ContainerDied","Data":"6aabb8ca8de6abf3b5d7ffe5a13d446fc4985a2c0390bc70f95ddb7412e88dab"} Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.385726 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" event={"ID":"a49a2742-5f89-4a17-a477-dffb8db27f9c","Type":"ContainerDied","Data":"e18e108294a69027d7d1984686ccbcaf96ce9f76841f8a135b5dd88f1b49c849"} Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.385795 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e18e108294a69027d7d1984686ccbcaf96ce9f76841f8a135b5dd88f1b49c849" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.385868 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.388833 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f3d3b932-d519-4458-bd2f-4d747a16a5b7","Type":"ContainerStarted","Data":"9a51f79cd27c669388d0b6ac4fef3ebddc5fd9a271b21b393d7617c9f4c15207"} Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.389825 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzx85" event={"ID":"286d64f0-03e6-439e-80d6-be053d25d93b","Type":"ContainerStarted","Data":"ba92682ba9f57d02733e253196b244eaf1fdf48b550157630d6ceb832b6e9027"} Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.391476 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" event={"ID":"87f08450-5929-4441-88f4-fbaec18e0f73","Type":"ContainerStarted","Data":"bc6b78d83f108170f6150810201863d6a2d05b1de1f35a695d00df014cef352e"} Dec 08 09:17:15 crc kubenswrapper[4662]: W1208 09:17:15.406390 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2cd2fc20_49c6_49c1_8111_a3ea33c114d1.slice/crio-8bb8e0fb48e5d4d4868583fb33f284f49b67d2d09c9baf35a8941a5519318ca4 WatchSource:0}: Error finding container 8bb8e0fb48e5d4d4868583fb33f284f49b67d2d09c9baf35a8941a5519318ca4: Status 404 returned error can't find the container with id 8bb8e0fb48e5d4d4868583fb33f284f49b67d2d09c9baf35a8941a5519318ca4 Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.671395 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:15 crc kubenswrapper[4662]: [-]has-synced failed: reason 
withheld Dec 08 09:17:15 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:15 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.672010 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.834880 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pfnf8"] Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.835964 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.838047 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.845156 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pfnf8"] Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.896656 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-utilities\") pod \"redhat-operators-pfnf8\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.896770 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-catalog-content\") pod \"redhat-operators-pfnf8\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.896824 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/5edca319-b627-48cd-b8ed-164446cefc08-kube-api-access-pkz5n\") pod \"redhat-operators-pfnf8\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.997375 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-catalog-content\") pod \"redhat-operators-pfnf8\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.997630 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/5edca319-b627-48cd-b8ed-164446cefc08-kube-api-access-pkz5n\") pod \"redhat-operators-pfnf8\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.997721 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-utilities\") pod \"redhat-operators-pfnf8\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " 
pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.997979 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-catalog-content\") pod \"redhat-operators-pfnf8\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:15 crc kubenswrapper[4662]: I1208 09:17:15.998215 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-utilities\") pod \"redhat-operators-pfnf8\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.024776 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/5edca319-b627-48cd-b8ed-164446cefc08-kube-api-access-pkz5n\") pod \"redhat-operators-pfnf8\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.153994 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.230328 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nqjvm"] Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.231543 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.248893 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqjvm"] Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.302853 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-catalog-content\") pod \"redhat-operators-nqjvm\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.302974 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5l9b\" (UniqueName: \"kubernetes.io/projected/ad586273-365b-4e31-bcc4-d78731f84d8c-kube-api-access-k5l9b\") pod \"redhat-operators-nqjvm\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.303053 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-utilities\") pod \"redhat-operators-nqjvm\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.376492 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pfnf8"] Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.398555 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfnf8" 
event={"ID":"5edca319-b627-48cd-b8ed-164446cefc08","Type":"ContainerStarted","Data":"0c8532fbf75eaeef890ded86481f0ae351fb14386b1399da01c45c00465d7c84"} Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.400037 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzx85" event={"ID":"286d64f0-03e6-439e-80d6-be053d25d93b","Type":"ContainerStarted","Data":"016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47"} Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.401009 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvfv4" event={"ID":"2cd2fc20-49c6-49c1-8111-a3ea33c114d1","Type":"ContainerStarted","Data":"8bb8e0fb48e5d4d4868583fb33f284f49b67d2d09c9baf35a8941a5519318ca4"} Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.404449 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5l9b\" (UniqueName: \"kubernetes.io/projected/ad586273-365b-4e31-bcc4-d78731f84d8c-kube-api-access-k5l9b\") pod \"redhat-operators-nqjvm\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.404504 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-utilities\") pod \"redhat-operators-nqjvm\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.404570 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-catalog-content\") pod \"redhat-operators-nqjvm\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.405106 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-utilities\") pod \"redhat-operators-nqjvm\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.405227 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-catalog-content\") pod \"redhat-operators-nqjvm\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.427603 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5l9b\" (UniqueName: \"kubernetes.io/projected/ad586273-365b-4e31-bcc4-d78731f84d8c-kube-api-access-k5l9b\") pod \"redhat-operators-nqjvm\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.559361 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.673042 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:16 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:16 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:16 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.673299 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:16 crc kubenswrapper[4662]: I1208 09:17:16.804287 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nqjvm"] Dec 08 09:17:16 crc kubenswrapper[4662]: W1208 09:17:16.811525 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad586273_365b_4e31_bcc4_d78731f84d8c.slice/crio-4e4f27fdff350aa3df55a4ea2ff8e2f6fad2668c0bf6cd262bfe466b046dc192 WatchSource:0}: Error finding container 4e4f27fdff350aa3df55a4ea2ff8e2f6fad2668c0bf6cd262bfe466b046dc192: Status 404 returned error can't find the container with id 4e4f27fdff350aa3df55a4ea2ff8e2f6fad2668c0bf6cd262bfe466b046dc192 Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.316245 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.321403 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/42f18be0-5f4b-4e53-ac80-451fbfc548bf-metrics-certs\") pod \"network-metrics-daemon-hd7m7\" (UID: \"42f18be0-5f4b-4e53-ac80-451fbfc548bf\") " pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.407180 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f3d3b932-d519-4458-bd2f-4d747a16a5b7","Type":"ContainerStarted","Data":"a6f358396f9c1ea7cd5bf31985cdcf34e5e3ce4d772adcdf0f3853026378d605"} Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.408089 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqjvm" event={"ID":"ad586273-365b-4e31-bcc4-d78731f84d8c","Type":"ContainerStarted","Data":"4e4f27fdff350aa3df55a4ea2ff8e2f6fad2668c0bf6cd262bfe466b046dc192"} Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.409358 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" event={"ID":"87f08450-5929-4441-88f4-fbaec18e0f73","Type":"ContainerStarted","Data":"fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0"} Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.411101 4662 generic.go:334] "Generic (PLEG): container finished" podID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" 
containerID="44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb" exitCode=0 Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.411154 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvfv4" event={"ID":"2cd2fc20-49c6-49c1-8111-a3ea33c114d1","Type":"ContainerDied","Data":"44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb"} Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.412481 4662 generic.go:334] "Generic (PLEG): container finished" podID="286d64f0-03e6-439e-80d6-be053d25d93b" containerID="016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47" exitCode=0 Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.412507 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzx85" event={"ID":"286d64f0-03e6-439e-80d6-be053d25d93b","Type":"ContainerDied","Data":"016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47"} Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.453516 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hd7m7" Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.634393 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.645953 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-6fclc" Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.682097 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:17 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:17 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:17 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.682143 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.767471 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.768393 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.776537 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.776613 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.791293 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.934691 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"57dbb046-a397-4fbc-8ec9-11e5f923b8a6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 08 09:17:17 crc kubenswrapper[4662]: I1208 09:17:17.934785 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"57dbb046-a397-4fbc-8ec9-11e5f923b8a6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.035342 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"57dbb046-a397-4fbc-8ec9-11e5f923b8a6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.035713 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"57dbb046-a397-4fbc-8ec9-11e5f923b8a6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.036170 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"57dbb046-a397-4fbc-8ec9-11e5f923b8a6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.077553 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"57dbb046-a397-4fbc-8ec9-11e5f923b8a6\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.112460 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hd7m7"] Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.136408 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 08 09:17:18 crc kubenswrapper[4662]: W1208 09:17:18.140466 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42f18be0_5f4b_4e53_ac80_451fbfc548bf.slice/crio-efee725a49ff27829c44ebd10eaab7430200294e52190d71a4842c3a0ec6d31e WatchSource:0}: Error finding container efee725a49ff27829c44ebd10eaab7430200294e52190d71a4842c3a0ec6d31e: Status 404 returned error can't find the container with id efee725a49ff27829c44ebd10eaab7430200294e52190d71a4842c3a0ec6d31e Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.514703 4662 generic.go:334] "Generic (PLEG): container finished" podID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerID="0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216" exitCode=0 Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.515015 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqjvm" event={"ID":"ad586273-365b-4e31-bcc4-d78731f84d8c","Type":"ContainerDied","Data":"0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216"} Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.520283 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" event={"ID":"42f18be0-5f4b-4e53-ac80-451fbfc548bf","Type":"ContainerStarted","Data":"efee725a49ff27829c44ebd10eaab7430200294e52190d71a4842c3a0ec6d31e"} Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.527194 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.530207 4662 generic.go:334] "Generic (PLEG): container finished" podID="f3d3b932-d519-4458-bd2f-4d747a16a5b7" containerID="a6f358396f9c1ea7cd5bf31985cdcf34e5e3ce4d772adcdf0f3853026378d605" exitCode=0 Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.530545 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f3d3b932-d519-4458-bd2f-4d747a16a5b7","Type":"ContainerDied","Data":"a6f358396f9c1ea7cd5bf31985cdcf34e5e3ce4d772adcdf0f3853026378d605"} Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.538184 4662 generic.go:334] "Generic (PLEG): container finished" podID="5edca319-b627-48cd-b8ed-164446cefc08" containerID="dfe364e9f2e3d3d05e64c9a8cd6293aaf748b7c509a6c80860996dcdc5ba6112" exitCode=0 Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.539229 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfnf8" event={"ID":"5edca319-b627-48cd-b8ed-164446cefc08","Type":"ContainerDied","Data":"dfe364e9f2e3d3d05e64c9a8cd6293aaf748b7c509a6c80860996dcdc5ba6112"} Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.619556 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" podStartSLOduration=143.619540161 podStartE2EDuration="2m23.619540161s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:18.618957265 +0000 UTC m=+162.187985255" watchObservedRunningTime="2025-12-08 09:17:18.619540161 +0000 UTC m=+162.188568141" Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.671479 4662 patch_prober.go:28] interesting 
pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:18 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:18 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:18 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:18 crc kubenswrapper[4662]: I1208 09:17:18.671548 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:19 crc kubenswrapper[4662]: I1208 09:17:19.107222 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-2gcvz" Dec 08 09:17:19 crc kubenswrapper[4662]: I1208 09:17:19.558129 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"57dbb046-a397-4fbc-8ec9-11e5f923b8a6","Type":"ContainerStarted","Data":"8f25859a7ddb781175a3632c94e033fc730f4f7792cb526395bfb045b2498438"} Dec 08 09:17:19 crc kubenswrapper[4662]: I1208 09:17:19.567378 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" event={"ID":"42f18be0-5f4b-4e53-ac80-451fbfc548bf","Type":"ContainerStarted","Data":"c54adaeebbf3695574f18407300826160461a38633c83f8bc7d1368a59073c32"} Dec 08 09:17:19 crc kubenswrapper[4662]: I1208 09:17:19.567421 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hd7m7" event={"ID":"42f18be0-5f4b-4e53-ac80-451fbfc548bf","Type":"ContainerStarted","Data":"dee67aa13dbbaa20886424156c66eb3c2a24c4102a39dbd7c5a24d4d257eef2d"} Dec 08 09:17:19 crc kubenswrapper[4662]: I1208 09:17:19.593721 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hd7m7" podStartSLOduration=144.593699793 podStartE2EDuration="2m24.593699793s" podCreationTimestamp="2025-12-08 09:14:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:19.582781417 +0000 UTC m=+163.151809407" watchObservedRunningTime="2025-12-08 09:17:19.593699793 +0000 UTC m=+163.162727783" Dec 08 09:17:19 crc kubenswrapper[4662]: I1208 09:17:19.677426 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:19 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:19 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:19 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:19 crc kubenswrapper[4662]: I1208 09:17:19.677569 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.032348 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.203422 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kubelet-dir\") pod \"f3d3b932-d519-4458-bd2f-4d747a16a5b7\" (UID: \"f3d3b932-d519-4458-bd2f-4d747a16a5b7\") " Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.203818 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kube-api-access\") pod \"f3d3b932-d519-4458-bd2f-4d747a16a5b7\" (UID: \"f3d3b932-d519-4458-bd2f-4d747a16a5b7\") " Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.203574 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f3d3b932-d519-4458-bd2f-4d747a16a5b7" (UID: "f3d3b932-d519-4458-bd2f-4d747a16a5b7"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.204142 4662 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.237575 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f3d3b932-d519-4458-bd2f-4d747a16a5b7" (UID: "f3d3b932-d519-4458-bd2f-4d747a16a5b7"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.306401 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f3d3b932-d519-4458-bd2f-4d747a16a5b7-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.587666 4662 generic.go:334] "Generic (PLEG): container finished" podID="57dbb046-a397-4fbc-8ec9-11e5f923b8a6" containerID="013df97d0115ab09967b3dbf43f862cd78822a004614c85e011d9e038ccffae1" exitCode=0 Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.587778 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"57dbb046-a397-4fbc-8ec9-11e5f923b8a6","Type":"ContainerDied","Data":"013df97d0115ab09967b3dbf43f862cd78822a004614c85e011d9e038ccffae1"} Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.593117 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.593509 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"f3d3b932-d519-4458-bd2f-4d747a16a5b7","Type":"ContainerDied","Data":"9a51f79cd27c669388d0b6ac4fef3ebddc5fd9a271b21b393d7617c9f4c15207"} Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.593544 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a51f79cd27c669388d0b6ac4fef3ebddc5fd9a271b21b393d7617c9f4c15207" Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.670656 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:20 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:20 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:20 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:20 crc kubenswrapper[4662]: I1208 09:17:20.670724 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:21 crc kubenswrapper[4662]: I1208 09:17:21.670241 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:21 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:21 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:21 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:21 crc kubenswrapper[4662]: I1208 09:17:21.670305 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.047036 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.243709 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kube-api-access\") pod \"57dbb046-a397-4fbc-8ec9-11e5f923b8a6\" (UID: \"57dbb046-a397-4fbc-8ec9-11e5f923b8a6\") " Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.243791 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kubelet-dir\") pod \"57dbb046-a397-4fbc-8ec9-11e5f923b8a6\" (UID: \"57dbb046-a397-4fbc-8ec9-11e5f923b8a6\") " Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.244027 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "57dbb046-a397-4fbc-8ec9-11e5f923b8a6" (UID: "57dbb046-a397-4fbc-8ec9-11e5f923b8a6"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.247949 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "57dbb046-a397-4fbc-8ec9-11e5f923b8a6" (UID: "57dbb046-a397-4fbc-8ec9-11e5f923b8a6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.311584 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-xntqx" Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.345646 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.345685 4662 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/57dbb046-a397-4fbc-8ec9-11e5f923b8a6-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.668250 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"57dbb046-a397-4fbc-8ec9-11e5f923b8a6","Type":"ContainerDied","Data":"8f25859a7ddb781175a3632c94e033fc730f4f7792cb526395bfb045b2498438"} Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.668296 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f25859a7ddb781175a3632c94e033fc730f4f7792cb526395bfb045b2498438" Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.668357 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.669455 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:22 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:22 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:22 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:22 crc kubenswrapper[4662]: I1208 09:17:22.669514 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:23 crc kubenswrapper[4662]: I1208 09:17:23.670361 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:23 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:23 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:23 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:23 crc kubenswrapper[4662]: I1208 09:17:23.670425 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:23 crc kubenswrapper[4662]: I1208 09:17:23.836269 4662 patch_prober.go:28] interesting pod/console-f9d7485db-9lp67 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Dec 08 09:17:23 crc kubenswrapper[4662]: I1208 09:17:23.836326 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-9lp67" podUID="a0cf72db-464e-4859-bec6-0e3d456e10aa" containerName="console" probeResult="failure" output="Get \"https://10.217.0.22:8443/health\": dial tcp 10.217.0.22:8443: connect: connection refused" Dec 08 09:17:24 crc kubenswrapper[4662]: I1208 09:17:24.321682 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:24 crc kubenswrapper[4662]: I1208 09:17:24.671192 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:24 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:24 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:24 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:24 crc kubenswrapper[4662]: I1208 09:17:24.671241 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:25 crc kubenswrapper[4662]: I1208 09:17:25.669952 4662 patch_prober.go:28] interesting 
pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:25 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:25 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:25 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:25 crc kubenswrapper[4662]: I1208 09:17:25.670020 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:26 crc kubenswrapper[4662]: I1208 09:17:26.670052 4662 patch_prober.go:28] interesting pod/router-default-5444994796-s5f92 container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 09:17:26 crc kubenswrapper[4662]: [-]has-synced failed: reason withheld Dec 08 09:17:26 crc kubenswrapper[4662]: [+]process-running ok Dec 08 09:17:26 crc kubenswrapper[4662]: healthz check failed Dec 08 09:17:26 crc kubenswrapper[4662]: I1208 09:17:26.670111 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-s5f92" podUID="28226390-eaa7-48f5-8886-b50a88f4b37c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:17:27 crc kubenswrapper[4662]: I1208 09:17:27.670032 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:27 crc kubenswrapper[4662]: I1208 09:17:27.672960 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-s5f92" Dec 08 09:17:32 crc kubenswrapper[4662]: I1208 09:17:32.611841 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:17:32 crc kubenswrapper[4662]: I1208 09:17:32.612411 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:17:33 crc kubenswrapper[4662]: I1208 09:17:33.841250 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:33 crc kubenswrapper[4662]: I1208 09:17:33.847466 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:17:34 crc kubenswrapper[4662]: I1208 09:17:34.339306 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:17:42 crc kubenswrapper[4662]: I1208 09:17:42.923852 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Dec 08 09:17:43 crc kubenswrapper[4662]: I1208 09:17:43.994660 4662 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-sgt6f" Dec 08 09:17:48 crc kubenswrapper[4662]: E1208 09:17:48.292235 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Dec 08 09:17:48 crc kubenswrapper[4662]: E1208 09:17:48.292763 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nf5wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-557kf_openshift-marketplace(c761fbd0-5303-4e6c-bd2c-509135843e80): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 08 09:17:48 crc kubenswrapper[4662]: E1208 09:17:48.294381 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-557kf" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.354129 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 08 09:17:49 crc kubenswrapper[4662]: E1208 09:17:49.354609 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d3b932-d519-4458-bd2f-4d747a16a5b7" containerName="pruner" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.354621 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d3b932-d519-4458-bd2f-4d747a16a5b7" containerName="pruner" Dec 08 09:17:49 crc kubenswrapper[4662]: E1208 09:17:49.354633 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57dbb046-a397-4fbc-8ec9-11e5f923b8a6" containerName="pruner" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 
09:17:49.354639 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="57dbb046-a397-4fbc-8ec9-11e5f923b8a6" containerName="pruner" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.354725 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="57dbb046-a397-4fbc-8ec9-11e5f923b8a6" containerName="pruner" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.354867 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d3b932-d519-4458-bd2f-4d747a16a5b7" containerName="pruner" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.355203 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.358267 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.359531 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.361985 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.448521 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6ec3fd-846e-4471-904e-2a1731ca8167-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2e6ec3fd-846e-4471-904e-2a1731ca8167\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.448607 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6ec3fd-846e-4471-904e-2a1731ca8167-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2e6ec3fd-846e-4471-904e-2a1731ca8167\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.556734 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6ec3fd-846e-4471-904e-2a1731ca8167-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2e6ec3fd-846e-4471-904e-2a1731ca8167\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.556853 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6ec3fd-846e-4471-904e-2a1731ca8167-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2e6ec3fd-846e-4471-904e-2a1731ca8167\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.556855 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6ec3fd-846e-4471-904e-2a1731ca8167-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"2e6ec3fd-846e-4471-904e-2a1731ca8167\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.579125 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6ec3fd-846e-4471-904e-2a1731ca8167-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"2e6ec3fd-846e-4471-904e-2a1731ca8167\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 08 09:17:49 crc kubenswrapper[4662]: I1208 09:17:49.686832 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 08 09:17:52 crc kubenswrapper[4662]: E1208 09:17:52.489814 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-557kf" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" Dec 08 09:17:52 crc kubenswrapper[4662]: E1208 09:17:52.581714 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 08 09:17:52 crc kubenswrapper[4662]: E1208 09:17:52.581908 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pkz5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pfnf8_openshift-marketplace(5edca319-b627-48cd-b8ed-164446cefc08): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 08 09:17:52 crc kubenswrapper[4662]: E1208 09:17:52.583677 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pfnf8" podUID="5edca319-b627-48cd-b8ed-164446cefc08" Dec 08 09:17:53 crc kubenswrapper[4662]: E1208 09:17:53.785632 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/redhat-operators-pfnf8" podUID="5edca319-b627-48cd-b8ed-164446cefc08" Dec 08 09:17:53 crc kubenswrapper[4662]: E1208 09:17:53.871306 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 08 09:17:53 crc kubenswrapper[4662]: E1208 09:17:53.871460 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cjjfd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mvfv4_openshift-marketplace(2cd2fc20-49c6-49c1-8111-a3ea33c114d1): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 08 09:17:53 crc kubenswrapper[4662]: E1208 09:17:53.872898 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mvfv4" podUID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.761839 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.763371 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.766194 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.834213 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.834255 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-var-lock\") pod \"installer-9-crc\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.834331 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kube-api-access\") pod \"installer-9-crc\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.935689 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.936028 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-var-lock\") pod \"installer-9-crc\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.936062 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-var-lock\") pod \"installer-9-crc\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.935839 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kubelet-dir\") pod \"installer-9-crc\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.936194 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kube-api-access\") pod \"installer-9-crc\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:54 crc kubenswrapper[4662]: I1208 09:17:54.958016 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:55 crc kubenswrapper[4662]: I1208 09:17:55.087189 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.762077 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.762627 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6k62l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jzx85_openshift-marketplace(286d64f0-03e6-439e-80d6-be053d25d93b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.763267 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.763339 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wnnzk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-4vsqx_openshift-marketplace(51f862c6-f9e4-4c88-b0fa-5b4e5acb7856): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.764796 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-4vsqx" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.767456 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jzx85" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.772991 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.773140 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rrkc4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-64xq9_openshift-marketplace(d746d881-44be-4a9e-8b0f-f619328d1610): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.774266 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-64xq9" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.824021 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.824207 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k5l9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nqjvm_openshift-marketplace(ad586273-365b-4e31-bcc4-d78731f84d8c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.825983 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nqjvm" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c"
Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.882639 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jzx85" podUID="286d64f0-03e6-439e-80d6-be053d25d93b"
Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.883235 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-4vsqx" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856"
Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.884604 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-nqjvm" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c"
Dec 08 09:17:55 crc kubenswrapper[4662]: E1208 09:17:55.886193 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-64xq9" podUID="d746d881-44be-4a9e-8b0f-f619328d1610"
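The transition above from ErrImagePull to ImagePullBackOff happens because the kubelet gates repeat pulls of the same image behind an exponential back-off. A minimal sketch of that bookkeeping, assuming the kubelet's usual defaults (a 10s initial step doubling per failure to a 300s cap) and using client-go's flowcontrol.Backoff; the loop and print statements are illustrative only:

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/util/flowcontrol"
    )

    // Sketch of the gate behind the ImagePullBackOff entries above, keyed by
    // image reference, assuming kubelet-like defaults of 10s doubling to 300s.
    func main() {
        backoff := flowcontrol.NewBackOff(10*time.Second, 300*time.Second)
        image := "registry.redhat.io/redhat/redhat-marketplace-index:v4.18"

        for attempt := 1; attempt <= 4; attempt++ {
            now := time.Now()
            if backoff.IsInBackOffSinceUpdate(image, now) {
                // surfaced in the log as "Back-off pulling image ..."
                fmt.Printf("attempt %d: in back-off, pull not retried\n", attempt)
                continue
            }
            // a pull was attempted and failed ("context canceled" above);
            // record it so the next back-off window is wider
            backoff.Next(image, now)
            fmt.Printf("attempt %d: pull failed, back-off window widened\n", attempt)
        }
    }

Run in a tight loop like this, only the first attempt reaches the puller; the rest are absorbed by the back-off window, which matches the "Error syncing pod, skipping ... ImagePullBackOff" pattern in the entries above.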
Dec 08 09:17:56 crc kubenswrapper[4662]: I1208 09:17:56.061162 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"]
Dec 08 09:17:56 crc kubenswrapper[4662]: I1208 09:17:56.144334 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Dec 08 09:17:56 crc kubenswrapper[4662]: I1208 09:17:56.887785 4662 generic.go:334] "Generic (PLEG): container finished" podID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerID="2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8" exitCode=0
Dec 08 09:17:56 crc kubenswrapper[4662]: I1208 09:17:56.887960 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kv9l" event={"ID":"fd5d97ce-9155-4ca4-856a-99db2e1b46a6","Type":"ContainerDied","Data":"2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8"}
Dec 08 09:17:56 crc kubenswrapper[4662]: I1208 09:17:56.890663 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e6ec3fd-846e-4471-904e-2a1731ca8167","Type":"ContainerStarted","Data":"fdf52201bd0326bf10c5075ae924120b2b053684c94842cc0d082d2bbdf95cfe"}
Dec 08 09:17:56 crc kubenswrapper[4662]: I1208 09:17:56.890691 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e6ec3fd-846e-4471-904e-2a1731ca8167","Type":"ContainerStarted","Data":"7908ac1bb598d78757bb9cf0e0fb50772302a0c8ad85173e001879882cfb0448"}
Dec 08 09:17:56 crc kubenswrapper[4662]: I1208 09:17:56.899144 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1","Type":"ContainerStarted","Data":"75f5bc9eb8e468fe3d90dd78091355322cdc2347cdf1a84c96de45ca2d2bfb16"}
Dec 08 09:17:56 crc kubenswrapper[4662]: I1208 09:17:56.899190 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1","Type":"ContainerStarted","Data":"60880d44a5c97de2e7d93f3b055b5e25b8ef3f5d977acf174b2f3048b75c07f6"}
Dec 08 09:17:56 crc kubenswrapper[4662]: I1208 09:17:56.928384 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=7.928364517 podStartE2EDuration="7.928364517s" podCreationTimestamp="2025-12-08 09:17:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:56.926324011 +0000 UTC m=+200.495352001" watchObservedRunningTime="2025-12-08 09:17:56.928364517 +0000 UTC m=+200.497392517"
Dec 08 09:17:56 crc kubenswrapper[4662]: I1208 09:17:56.946613 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.946589365 podStartE2EDuration="2.946589365s" podCreationTimestamp="2025-12-08 09:17:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:17:56.945005562 +0000 UTC m=+200.514033562" watchObservedRunningTime="2025-12-08 09:17:56.946589365 +0000 UTC m=+200.515617355"
Dec 08 09:17:57 crc kubenswrapper[4662]: I1208 09:17:57.906765 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kv9l"
event={"ID":"fd5d97ce-9155-4ca4-856a-99db2e1b46a6","Type":"ContainerStarted","Data":"f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0"} Dec 08 09:17:57 crc kubenswrapper[4662]: I1208 09:17:57.910533 4662 generic.go:334] "Generic (PLEG): container finished" podID="2e6ec3fd-846e-4471-904e-2a1731ca8167" containerID="fdf52201bd0326bf10c5075ae924120b2b053684c94842cc0d082d2bbdf95cfe" exitCode=0 Dec 08 09:17:57 crc kubenswrapper[4662]: I1208 09:17:57.910599 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e6ec3fd-846e-4471-904e-2a1731ca8167","Type":"ContainerDied","Data":"fdf52201bd0326bf10c5075ae924120b2b053684c94842cc0d082d2bbdf95cfe"} Dec 08 09:17:57 crc kubenswrapper[4662]: I1208 09:17:57.923759 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5kv9l" podStartSLOduration=3.022358123 podStartE2EDuration="45.923723497s" podCreationTimestamp="2025-12-08 09:17:12 +0000 UTC" firstStartedPulling="2025-12-08 09:17:14.367255821 +0000 UTC m=+157.936283811" lastFinishedPulling="2025-12-08 09:17:57.268621195 +0000 UTC m=+200.837649185" observedRunningTime="2025-12-08 09:17:57.921842776 +0000 UTC m=+201.490870776" watchObservedRunningTime="2025-12-08 09:17:57.923723497 +0000 UTC m=+201.492751517" Dec 08 09:17:59 crc kubenswrapper[4662]: I1208 09:17:59.208693 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 08 09:17:59 crc kubenswrapper[4662]: I1208 09:17:59.309470 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6ec3fd-846e-4471-904e-2a1731ca8167-kube-api-access\") pod \"2e6ec3fd-846e-4471-904e-2a1731ca8167\" (UID: \"2e6ec3fd-846e-4471-904e-2a1731ca8167\") " Dec 08 09:17:59 crc kubenswrapper[4662]: I1208 09:17:59.309563 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6ec3fd-846e-4471-904e-2a1731ca8167-kubelet-dir\") pod \"2e6ec3fd-846e-4471-904e-2a1731ca8167\" (UID: \"2e6ec3fd-846e-4471-904e-2a1731ca8167\") " Dec 08 09:17:59 crc kubenswrapper[4662]: I1208 09:17:59.309924 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e6ec3fd-846e-4471-904e-2a1731ca8167-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2e6ec3fd-846e-4471-904e-2a1731ca8167" (UID: "2e6ec3fd-846e-4471-904e-2a1731ca8167"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:17:59 crc kubenswrapper[4662]: I1208 09:17:59.316000 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6ec3fd-846e-4471-904e-2a1731ca8167-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2e6ec3fd-846e-4471-904e-2a1731ca8167" (UID: "2e6ec3fd-846e-4471-904e-2a1731ca8167"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:17:59 crc kubenswrapper[4662]: I1208 09:17:59.411389 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e6ec3fd-846e-4471-904e-2a1731ca8167-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 09:17:59 crc kubenswrapper[4662]: I1208 09:17:59.411672 4662 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6ec3fd-846e-4471-904e-2a1731ca8167-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 09:17:59 crc kubenswrapper[4662]: I1208 09:17:59.921646 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"2e6ec3fd-846e-4471-904e-2a1731ca8167","Type":"ContainerDied","Data":"7908ac1bb598d78757bb9cf0e0fb50772302a0c8ad85173e001879882cfb0448"} Dec 08 09:17:59 crc kubenswrapper[4662]: I1208 09:17:59.921687 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7908ac1bb598d78757bb9cf0e0fb50772302a0c8ad85173e001879882cfb0448" Dec 08 09:17:59 crc kubenswrapper[4662]: I1208 09:17:59.921796 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Dec 08 09:18:02 crc kubenswrapper[4662]: I1208 09:18:02.612088 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:18:02 crc kubenswrapper[4662]: I1208 09:18:02.612169 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:18:02 crc kubenswrapper[4662]: I1208 09:18:02.612228 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:18:02 crc kubenswrapper[4662]: I1208 09:18:02.612936 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 09:18:02 crc kubenswrapper[4662]: I1208 09:18:02.613077 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" containerID="cri-o://14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28" gracePeriod=600 Dec 08 09:18:02 crc kubenswrapper[4662]: I1208 09:18:02.947846 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:18:02 crc kubenswrapper[4662]: I1208 09:18:02.948197 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:18:03 crc kubenswrapper[4662]: I1208 09:18:03.102193 4662 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:18:03 crc kubenswrapper[4662]: I1208 09:18:03.759117 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerID="14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28" exitCode=0 Dec 08 09:18:03 crc kubenswrapper[4662]: I1208 09:18:03.759461 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28"} Dec 08 09:18:03 crc kubenswrapper[4662]: I1208 09:18:03.759528 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"e90e87142063402540b30d02a0f13b49b808a8a4a75b80f889930edd7b43a54f"} Dec 08 09:18:03 crc kubenswrapper[4662]: I1208 09:18:03.808974 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:18:06 crc kubenswrapper[4662]: I1208 09:18:06.776839 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-557kf" event={"ID":"c761fbd0-5303-4e6c-bd2c-509135843e80","Type":"ContainerStarted","Data":"8b0a628235e5137ca72ecf5031bb2da12825a6349740790a3430477f9076e8a5"} Dec 08 09:18:06 crc kubenswrapper[4662]: I1208 09:18:06.779499 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfnf8" event={"ID":"5edca319-b627-48cd-b8ed-164446cefc08","Type":"ContainerStarted","Data":"9a07db42306f58f2817fcf7f9e5a0f1ea827e38ae494ec8f8fc8ad35cc9d83af"} Dec 08 09:18:07 crc kubenswrapper[4662]: I1208 09:18:07.788714 4662 generic.go:334] "Generic (PLEG): container finished" podID="c761fbd0-5303-4e6c-bd2c-509135843e80" containerID="8b0a628235e5137ca72ecf5031bb2da12825a6349740790a3430477f9076e8a5" exitCode=0 Dec 08 09:18:07 crc kubenswrapper[4662]: I1208 09:18:07.788945 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-557kf" event={"ID":"c761fbd0-5303-4e6c-bd2c-509135843e80","Type":"ContainerDied","Data":"8b0a628235e5137ca72ecf5031bb2da12825a6349740790a3430477f9076e8a5"} Dec 08 09:18:07 crc kubenswrapper[4662]: I1208 09:18:07.794209 4662 generic.go:334] "Generic (PLEG): container finished" podID="5edca319-b627-48cd-b8ed-164446cefc08" containerID="9a07db42306f58f2817fcf7f9e5a0f1ea827e38ae494ec8f8fc8ad35cc9d83af" exitCode=0 Dec 08 09:18:07 crc kubenswrapper[4662]: I1208 09:18:07.794259 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfnf8" event={"ID":"5edca319-b627-48cd-b8ed-164446cefc08","Type":"ContainerDied","Data":"9a07db42306f58f2817fcf7f9e5a0f1ea827e38ae494ec8f8fc8ad35cc9d83af"} Dec 08 09:18:08 crc kubenswrapper[4662]: I1208 09:18:08.846310 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfnf8" event={"ID":"5edca319-b627-48cd-b8ed-164446cefc08","Type":"ContainerStarted","Data":"d1530e6302a7aae03ced03fda1dc49738ca5be1be805010d7cb7590cb94cd5ad"} Dec 08 09:18:08 crc kubenswrapper[4662]: I1208 09:18:08.848991 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vsqx" 
event={"ID":"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856","Type":"ContainerStarted","Data":"943f4d611732fb49e2abbd20e551d3b6f054c9a5eff046bb75ed96e149b821df"} Dec 08 09:18:08 crc kubenswrapper[4662]: I1208 09:18:08.858191 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvfv4" event={"ID":"2cd2fc20-49c6-49c1-8111-a3ea33c114d1","Type":"ContainerStarted","Data":"02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515"} Dec 08 09:18:08 crc kubenswrapper[4662]: I1208 09:18:08.859854 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-557kf" event={"ID":"c761fbd0-5303-4e6c-bd2c-509135843e80","Type":"ContainerStarted","Data":"e693b7b6c2f610dca49afd126dd6e5b13383a43c5ef80b6d803986ba88aa325f"} Dec 08 09:18:08 crc kubenswrapper[4662]: I1208 09:18:08.872296 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pfnf8" podStartSLOduration=4.239554608 podStartE2EDuration="53.872276509s" podCreationTimestamp="2025-12-08 09:17:15 +0000 UTC" firstStartedPulling="2025-12-08 09:17:18.539807635 +0000 UTC m=+162.108835625" lastFinishedPulling="2025-12-08 09:18:08.172529536 +0000 UTC m=+211.741557526" observedRunningTime="2025-12-08 09:18:08.871782886 +0000 UTC m=+212.440810876" watchObservedRunningTime="2025-12-08 09:18:08.872276509 +0000 UTC m=+212.441304499" Dec 08 09:18:08 crc kubenswrapper[4662]: I1208 09:18:08.897034 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-557kf" podStartSLOduration=3.036594214 podStartE2EDuration="55.897012886s" podCreationTimestamp="2025-12-08 09:17:13 +0000 UTC" firstStartedPulling="2025-12-08 09:17:15.378043448 +0000 UTC m=+158.947071438" lastFinishedPulling="2025-12-08 09:18:08.23846212 +0000 UTC m=+211.807490110" observedRunningTime="2025-12-08 09:18:08.896108011 +0000 UTC m=+212.465136001" watchObservedRunningTime="2025-12-08 09:18:08.897012886 +0000 UTC m=+212.466040866" Dec 08 09:18:09 crc kubenswrapper[4662]: I1208 09:18:09.871368 4662 generic.go:334] "Generic (PLEG): container finished" podID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" containerID="02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515" exitCode=0 Dec 08 09:18:09 crc kubenswrapper[4662]: I1208 09:18:09.871513 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvfv4" event={"ID":"2cd2fc20-49c6-49c1-8111-a3ea33c114d1","Type":"ContainerDied","Data":"02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515"} Dec 08 09:18:09 crc kubenswrapper[4662]: I1208 09:18:09.876331 4662 generic.go:334] "Generic (PLEG): container finished" podID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerID="943f4d611732fb49e2abbd20e551d3b6f054c9a5eff046bb75ed96e149b821df" exitCode=0 Dec 08 09:18:09 crc kubenswrapper[4662]: I1208 09:18:09.876359 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vsqx" event={"ID":"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856","Type":"ContainerDied","Data":"943f4d611732fb49e2abbd20e551d3b6f054c9a5eff046bb75ed96e149b821df"} Dec 08 09:18:10 crc kubenswrapper[4662]: I1208 09:18:10.883900 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vsqx" event={"ID":"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856","Type":"ContainerStarted","Data":"13b65d6fac8422fd2c09fd0e0534aa6b158f8dca95d719c11fcb9cf8da97739e"} Dec 08 09:18:10 
crc kubenswrapper[4662]: I1208 09:18:10.885954 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqjvm" event={"ID":"ad586273-365b-4e31-bcc4-d78731f84d8c","Type":"ContainerStarted","Data":"df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366"} Dec 08 09:18:10 crc kubenswrapper[4662]: I1208 09:18:10.905295 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4vsqx" podStartSLOduration=3.529608972 podStartE2EDuration="58.905275886s" podCreationTimestamp="2025-12-08 09:17:12 +0000 UTC" firstStartedPulling="2025-12-08 09:17:14.302026989 +0000 UTC m=+157.871054979" lastFinishedPulling="2025-12-08 09:18:09.677693903 +0000 UTC m=+213.246721893" observedRunningTime="2025-12-08 09:18:10.900375262 +0000 UTC m=+214.469403262" watchObservedRunningTime="2025-12-08 09:18:10.905275886 +0000 UTC m=+214.474303876" Dec 08 09:18:11 crc kubenswrapper[4662]: I1208 09:18:11.903484 4662 generic.go:334] "Generic (PLEG): container finished" podID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerID="df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366" exitCode=0 Dec 08 09:18:11 crc kubenswrapper[4662]: I1208 09:18:11.903540 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqjvm" event={"ID":"ad586273-365b-4e31-bcc4-d78731f84d8c","Type":"ContainerDied","Data":"df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366"} Dec 08 09:18:12 crc kubenswrapper[4662]: I1208 09:18:12.759024 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:18:12 crc kubenswrapper[4662]: I1208 09:18:12.759075 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:18:12 crc kubenswrapper[4662]: I1208 09:18:12.804982 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:18:13 crc kubenswrapper[4662]: I1208 09:18:13.366052 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:18:13 crc kubenswrapper[4662]: I1208 09:18:13.366087 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:18:13 crc kubenswrapper[4662]: I1208 09:18:13.426050 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:18:13 crc kubenswrapper[4662]: I1208 09:18:13.960160 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:18:16 crc kubenswrapper[4662]: I1208 09:18:16.154164 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:18:16 crc kubenswrapper[4662]: I1208 09:18:16.154562 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:18:16 crc kubenswrapper[4662]: I1208 09:18:16.192937 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:18:16 crc kubenswrapper[4662]: I1208 09:18:16.935500 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-557kf"] Dec 08 09:18:16 crc kubenswrapper[4662]: I1208 09:18:16.935710 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-557kf" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" containerName="registry-server" containerID="cri-o://e693b7b6c2f610dca49afd126dd6e5b13383a43c5ef80b6d803986ba88aa325f" gracePeriod=2 Dec 08 09:18:16 crc kubenswrapper[4662]: I1208 09:18:16.969523 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:18:17 crc kubenswrapper[4662]: I1208 09:18:17.942357 4662 generic.go:334] "Generic (PLEG): container finished" podID="c761fbd0-5303-4e6c-bd2c-509135843e80" containerID="e693b7b6c2f610dca49afd126dd6e5b13383a43c5ef80b6d803986ba88aa325f" exitCode=0 Dec 08 09:18:17 crc kubenswrapper[4662]: I1208 09:18:17.942422 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-557kf" event={"ID":"c761fbd0-5303-4e6c-bd2c-509135843e80","Type":"ContainerDied","Data":"e693b7b6c2f610dca49afd126dd6e5b13383a43c5ef80b6d803986ba88aa325f"} Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.582438 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.671450 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-catalog-content\") pod \"c761fbd0-5303-4e6c-bd2c-509135843e80\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.671526 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nf5wf\" (UniqueName: \"kubernetes.io/projected/c761fbd0-5303-4e6c-bd2c-509135843e80-kube-api-access-nf5wf\") pod \"c761fbd0-5303-4e6c-bd2c-509135843e80\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.671615 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-utilities\") pod \"c761fbd0-5303-4e6c-bd2c-509135843e80\" (UID: \"c761fbd0-5303-4e6c-bd2c-509135843e80\") " Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.673858 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-utilities" (OuterVolumeSpecName: "utilities") pod "c761fbd0-5303-4e6c-bd2c-509135843e80" (UID: "c761fbd0-5303-4e6c-bd2c-509135843e80"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.679957 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c761fbd0-5303-4e6c-bd2c-509135843e80-kube-api-access-nf5wf" (OuterVolumeSpecName: "kube-api-access-nf5wf") pod "c761fbd0-5303-4e6c-bd2c-509135843e80" (UID: "c761fbd0-5303-4e6c-bd2c-509135843e80"). InnerVolumeSpecName "kube-api-access-nf5wf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.731420 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c761fbd0-5303-4e6c-bd2c-509135843e80" (UID: "c761fbd0-5303-4e6c-bd2c-509135843e80"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.773375 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.773420 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nf5wf\" (UniqueName: \"kubernetes.io/projected/c761fbd0-5303-4e6c-bd2c-509135843e80-kube-api-access-nf5wf\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.773435 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c761fbd0-5303-4e6c-bd2c-509135843e80-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.948625 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqjvm" event={"ID":"ad586273-365b-4e31-bcc4-d78731f84d8c","Type":"ContainerStarted","Data":"c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa"} Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.950648 4662 generic.go:334] "Generic (PLEG): container finished" podID="286d64f0-03e6-439e-80d6-be053d25d93b" containerID="b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e" exitCode=0 Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.950699 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzx85" event={"ID":"286d64f0-03e6-439e-80d6-be053d25d93b","Type":"ContainerDied","Data":"b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e"} Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.955877 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvfv4" event={"ID":"2cd2fc20-49c6-49c1-8111-a3ea33c114d1","Type":"ContainerStarted","Data":"4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44"} Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.959145 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-557kf" event={"ID":"c761fbd0-5303-4e6c-bd2c-509135843e80","Type":"ContainerDied","Data":"7f6525c761d0674b1224c6e7c6a1c5facd0eeaee43866944848519b4399e2894"} Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.959191 4662 scope.go:117] "RemoveContainer" containerID="e693b7b6c2f610dca49afd126dd6e5b13383a43c5ef80b6d803986ba88aa325f" Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.959341 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-557kf" Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.968476 4662 generic.go:334] "Generic (PLEG): container finished" podID="d746d881-44be-4a9e-8b0f-f619328d1610" containerID="3043c07d0abc3c4988148dd95413471750d31c3e50b2a9f1b4238e034fc80c64" exitCode=0 Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.968518 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64xq9" event={"ID":"d746d881-44be-4a9e-8b0f-f619328d1610","Type":"ContainerDied","Data":"3043c07d0abc3c4988148dd95413471750d31c3e50b2a9f1b4238e034fc80c64"} Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.975969 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nqjvm" podStartSLOduration=3.480581835 podStartE2EDuration="1m2.975951256s" podCreationTimestamp="2025-12-08 09:17:16 +0000 UTC" firstStartedPulling="2025-12-08 09:17:18.519778201 +0000 UTC m=+162.088806191" lastFinishedPulling="2025-12-08 09:18:18.015147632 +0000 UTC m=+221.584175612" observedRunningTime="2025-12-08 09:18:18.973437818 +0000 UTC m=+222.542465808" watchObservedRunningTime="2025-12-08 09:18:18.975951256 +0000 UTC m=+222.544979256" Dec 08 09:18:18 crc kubenswrapper[4662]: I1208 09:18:18.994174 4662 scope.go:117] "RemoveContainer" containerID="8b0a628235e5137ca72ecf5031bb2da12825a6349740790a3430477f9076e8a5" Dec 08 09:18:19 crc kubenswrapper[4662]: I1208 09:18:19.018788 4662 scope.go:117] "RemoveContainer" containerID="6aabb8ca8de6abf3b5d7ffe5a13d446fc4985a2c0390bc70f95ddb7412e88dab" Dec 08 09:18:19 crc kubenswrapper[4662]: I1208 09:18:19.042204 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mvfv4" podStartSLOduration=5.544637062 podStartE2EDuration="1m5.042186528s" podCreationTimestamp="2025-12-08 09:17:14 +0000 UTC" firstStartedPulling="2025-12-08 09:17:18.544156173 +0000 UTC m=+162.113184163" lastFinishedPulling="2025-12-08 09:18:18.041705639 +0000 UTC m=+221.610733629" observedRunningTime="2025-12-08 09:18:19.039372581 +0000 UTC m=+222.608400581" watchObservedRunningTime="2025-12-08 09:18:19.042186528 +0000 UTC m=+222.611214518" Dec 08 09:18:19 crc kubenswrapper[4662]: I1208 09:18:19.057424 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-557kf"] Dec 08 09:18:19 crc kubenswrapper[4662]: I1208 09:18:19.068232 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-557kf"] Dec 08 09:18:20 crc kubenswrapper[4662]: I1208 09:18:20.703695 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" path="/var/lib/kubelet/pods/c761fbd0-5303-4e6c-bd2c-509135843e80/volumes" Dec 08 09:18:22 crc kubenswrapper[4662]: I1208 09:18:22.001260 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzx85" event={"ID":"286d64f0-03e6-439e-80d6-be053d25d93b","Type":"ContainerStarted","Data":"4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b"} Dec 08 09:18:22 crc kubenswrapper[4662]: I1208 09:18:22.003258 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64xq9" event={"ID":"d746d881-44be-4a9e-8b0f-f619328d1610","Type":"ContainerStarted","Data":"4c93f0478b003ab7608804d556926c6caf0c8630d0812360fbc83cc7087a96e1"} Dec 08 09:18:22 crc 
kubenswrapper[4662]: I1208 09:18:22.023185 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jzx85" podStartSLOduration=4.289182205 podStartE2EDuration="1m8.02316781s" podCreationTimestamp="2025-12-08 09:17:14 +0000 UTC" firstStartedPulling="2025-12-08 09:17:17.414033674 +0000 UTC m=+160.983061664" lastFinishedPulling="2025-12-08 09:18:21.148019279 +0000 UTC m=+224.717047269" observedRunningTime="2025-12-08 09:18:22.01990051 +0000 UTC m=+225.588928510" watchObservedRunningTime="2025-12-08 09:18:22.02316781 +0000 UTC m=+225.592195810" Dec 08 09:18:22 crc kubenswrapper[4662]: I1208 09:18:22.806401 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:18:22 crc kubenswrapper[4662]: I1208 09:18:22.827190 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-64xq9" podStartSLOduration=4.27275862 podStartE2EDuration="1m10.827168905s" podCreationTimestamp="2025-12-08 09:17:12 +0000 UTC" firstStartedPulling="2025-12-08 09:17:14.366185172 +0000 UTC m=+157.935213162" lastFinishedPulling="2025-12-08 09:18:20.920595457 +0000 UTC m=+224.489623447" observedRunningTime="2025-12-08 09:18:22.049891661 +0000 UTC m=+225.618919651" watchObservedRunningTime="2025-12-08 09:18:22.827168905 +0000 UTC m=+226.396196895" Dec 08 09:18:23 crc kubenswrapper[4662]: I1208 09:18:23.157441 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:18:23 crc kubenswrapper[4662]: I1208 09:18:23.157717 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:18:24 crc kubenswrapper[4662]: I1208 09:18:24.205507 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-64xq9" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" containerName="registry-server" probeResult="failure" output=< Dec 08 09:18:24 crc kubenswrapper[4662]: timeout: failed to connect service ":50051" within 1s Dec 08 09:18:24 crc kubenswrapper[4662]: > Dec 08 09:18:24 crc kubenswrapper[4662]: I1208 09:18:24.788712 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:18:24 crc kubenswrapper[4662]: I1208 09:18:24.788887 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:18:24 crc kubenswrapper[4662]: I1208 09:18:24.832641 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:18:25 crc kubenswrapper[4662]: I1208 09:18:25.161198 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:18:25 crc kubenswrapper[4662]: I1208 09:18:25.161349 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:18:25 crc kubenswrapper[4662]: I1208 09:18:25.205199 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:18:26 crc kubenswrapper[4662]: I1208 09:18:26.061579 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mvfv4" 
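The startup and readiness probe transitions for the marketplace catalog pods above come from a gRPC health check against the registry-server on port 50051 (the log later shows the exact command, cmd=["grpc_health_probe","-addr=:50051"], and the "within 1s" timeout message appears verbatim above). A one-shot Go equivalent against the standard gRPC health service, sketched here with "localhost" assumed for the address:

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    // One-shot client against the standard gRPC health service, the same
    // check grpc_health_probe -addr=:50051 performs.
    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        conn, err := grpc.NewClient("localhost:50051",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
        if err != nil {
            // the 1s deadline mirrors the "within 1s" timeout logged above
            log.Fatalf("probe failed: %v", err)
        }
        fmt.Println("status:", resp.GetStatus()) // SERVING => probe passes
    }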
Dec 08 09:18:26 crc kubenswrapper[4662]: I1208 09:18:26.561071 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:18:26 crc kubenswrapper[4662]: I1208 09:18:26.561118 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:18:26 crc kubenswrapper[4662]: I1208 09:18:26.596997 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:18:27 crc kubenswrapper[4662]: I1208 09:18:27.071211 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:18:27 crc kubenswrapper[4662]: I1208 09:18:27.934380 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvfv4"] Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.200264 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-5xdtn"] Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.514366 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kv9l"] Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.514884 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5kv9l" podUID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerName="registry-server" containerID="cri-o://f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0" gracePeriod=30 Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.524879 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4vsqx"] Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.525123 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4vsqx" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerName="registry-server" containerID="cri-o://13b65d6fac8422fd2c09fd0e0534aa6b158f8dca95d719c11fcb9cf8da97739e" gracePeriod=30 Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.537343 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-64xq9"] Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.537602 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-64xq9" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" containerName="registry-server" containerID="cri-o://4c93f0478b003ab7608804d556926c6caf0c8630d0812360fbc83cc7087a96e1" gracePeriod=30 Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.548556 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z6pb2"] Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.549033 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" podUID="20d06f62-0413-4b18-9e01-05932b0a663b" containerName="marketplace-operator" containerID="cri-o://6632918a8e2281c149ff9533208260b6ee78ca5d2bd8b60f3f18623c8d7e74cd" gracePeriod=30 Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.568380 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzx85"] Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.568675 4662 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jzx85" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" containerName="registry-server" containerID="cri-o://4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b" gracePeriod=30 Dec 08 09:18:28 crc kubenswrapper[4662]: E1208 09:18:28.572918 4662 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 09:18:28 crc kubenswrapper[4662]: E1208 09:18:28.577860 4662 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.579715 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqjvm"] Dec 08 09:18:28 crc kubenswrapper[4662]: E1208 09:18:28.585994 4662 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b" cmd=["grpc_health_probe","-addr=:50051"] Dec 08 09:18:28 crc kubenswrapper[4662]: E1208 09:18:28.586066 4662 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-jzx85" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" containerName="registry-server" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.586698 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k8mdp"] Dec 08 09:18:28 crc kubenswrapper[4662]: E1208 09:18:28.586914 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" containerName="extract-content" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.586927 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" containerName="extract-content" Dec 08 09:18:28 crc kubenswrapper[4662]: E1208 09:18:28.586940 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" containerName="registry-server" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.586946 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" containerName="registry-server" Dec 08 09:18:28 crc kubenswrapper[4662]: E1208 09:18:28.586961 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e6ec3fd-846e-4471-904e-2a1731ca8167" containerName="pruner" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.586967 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e6ec3fd-846e-4471-904e-2a1731ca8167" containerName="pruner" Dec 08 09:18:28 crc kubenswrapper[4662]: E1208 09:18:28.586977 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" 
containerName="extract-utilities" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.586983 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" containerName="extract-utilities" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.587072 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="c761fbd0-5303-4e6c-bd2c-509135843e80" containerName="registry-server" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.587083 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e6ec3fd-846e-4471-904e-2a1731ca8167" containerName="pruner" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.587472 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.597310 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pfnf8"] Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.597530 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pfnf8" podUID="5edca319-b627-48cd-b8ed-164446cefc08" containerName="registry-server" containerID="cri-o://d1530e6302a7aae03ced03fda1dc49738ca5be1be805010d7cb7590cb94cd5ad" gracePeriod=30 Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.612003 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k8mdp"] Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.687932 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8de480c6-7855-45e8-91ad-574e204414ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k8mdp\" (UID: \"8de480c6-7855-45e8-91ad-574e204414ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.687996 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jszp2\" (UniqueName: \"kubernetes.io/projected/8de480c6-7855-45e8-91ad-574e204414ae-kube-api-access-jszp2\") pod \"marketplace-operator-79b997595-k8mdp\" (UID: \"8de480c6-7855-45e8-91ad-574e204414ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.688132 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8de480c6-7855-45e8-91ad-574e204414ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k8mdp\" (UID: \"8de480c6-7855-45e8-91ad-574e204414ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.789476 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jszp2\" (UniqueName: \"kubernetes.io/projected/8de480c6-7855-45e8-91ad-574e204414ae-kube-api-access-jszp2\") pod \"marketplace-operator-79b997595-k8mdp\" (UID: \"8de480c6-7855-45e8-91ad-574e204414ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.789536 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8de480c6-7855-45e8-91ad-574e204414ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k8mdp\" (UID: \"8de480c6-7855-45e8-91ad-574e204414ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.789597 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8de480c6-7855-45e8-91ad-574e204414ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k8mdp\" (UID: \"8de480c6-7855-45e8-91ad-574e204414ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.791690 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8de480c6-7855-45e8-91ad-574e204414ae-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k8mdp\" (UID: \"8de480c6-7855-45e8-91ad-574e204414ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.800443 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8de480c6-7855-45e8-91ad-574e204414ae-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k8mdp\" (UID: \"8de480c6-7855-45e8-91ad-574e204414ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.807288 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jszp2\" (UniqueName: \"kubernetes.io/projected/8de480c6-7855-45e8-91ad-574e204414ae-kube-api-access-jszp2\") pod \"marketplace-operator-79b997595-k8mdp\" (UID: \"8de480c6-7855-45e8-91ad-574e204414ae\") " pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:28 crc kubenswrapper[4662]: I1208 09:18:28.905523 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.012423 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.018366 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.085146 4662 generic.go:334] "Generic (PLEG): container finished" podID="286d64f0-03e6-439e-80d6-be053d25d93b" containerID="4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b" exitCode=0 Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.085241 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzx85" event={"ID":"286d64f0-03e6-439e-80d6-be053d25d93b","Type":"ContainerDied","Data":"4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b"} Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.085527 4662 scope.go:117] "RemoveContainer" containerID="4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.085462 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jzx85" event={"ID":"286d64f0-03e6-439e-80d6-be053d25d93b","Type":"ContainerDied","Data":"ba92682ba9f57d02733e253196b244eaf1fdf48b550157630d6ceb832b6e9027"} Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.085803 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jzx85" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.094648 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.095831 4662 generic.go:334] "Generic (PLEG): container finished" podID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerID="f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0" exitCode=0 Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.095892 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5kv9l" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.095934 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kv9l" event={"ID":"fd5d97ce-9155-4ca4-856a-99db2e1b46a6","Type":"ContainerDied","Data":"f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0"} Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.095972 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5kv9l" event={"ID":"fd5d97ce-9155-4ca4-856a-99db2e1b46a6","Type":"ContainerDied","Data":"a504a2642af0db023a2286fbeded2f9ebb736c0c2293b42ed2fa3b2173f916b8"} Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.099203 4662 generic.go:334] "Generic (PLEG): container finished" podID="d746d881-44be-4a9e-8b0f-f619328d1610" containerID="4c93f0478b003ab7608804d556926c6caf0c8630d0812360fbc83cc7087a96e1" exitCode=0 Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.099273 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64xq9" event={"ID":"d746d881-44be-4a9e-8b0f-f619328d1610","Type":"ContainerDied","Data":"4c93f0478b003ab7608804d556926c6caf0c8630d0812360fbc83cc7087a96e1"} Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.107955 4662 generic.go:334] "Generic (PLEG): container finished" podID="20d06f62-0413-4b18-9e01-05932b0a663b" containerID="6632918a8e2281c149ff9533208260b6ee78ca5d2bd8b60f3f18623c8d7e74cd" exitCode=0 Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.108011 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" event={"ID":"20d06f62-0413-4b18-9e01-05932b0a663b","Type":"ContainerDied","Data":"6632918a8e2281c149ff9533208260b6ee78ca5d2bd8b60f3f18623c8d7e74cd"} Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.114942 4662 generic.go:334] "Generic (PLEG): container finished" podID="5edca319-b627-48cd-b8ed-164446cefc08" containerID="d1530e6302a7aae03ced03fda1dc49738ca5be1be805010d7cb7590cb94cd5ad" exitCode=0 Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.114996 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfnf8" event={"ID":"5edca319-b627-48cd-b8ed-164446cefc08","Type":"ContainerDied","Data":"d1530e6302a7aae03ced03fda1dc49738ca5be1be805010d7cb7590cb94cd5ad"} Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.126243 4662 scope.go:117] "RemoveContainer" containerID="b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.132221 4662 generic.go:334] "Generic (PLEG): container finished" podID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerID="13b65d6fac8422fd2c09fd0e0534aa6b158f8dca95d719c11fcb9cf8da97739e" exitCode=0 Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.132376 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4vsqx" event={"ID":"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856","Type":"ContainerDied","Data":"13b65d6fac8422fd2c09fd0e0534aa6b158f8dca95d719c11fcb9cf8da97739e"} Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.132441 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nqjvm" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerName="registry-server" 
containerID="cri-o://c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa" gracePeriod=30 Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.132583 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mvfv4" podUID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" containerName="registry-server" containerID="cri-o://4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44" gracePeriod=2 Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.132866 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4vsqx" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.177065 4662 scope.go:117] "RemoveContainer" containerID="016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.194041 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-catalog-content\") pod \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.194104 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-utilities\") pod \"286d64f0-03e6-439e-80d6-be053d25d93b\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.194146 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-utilities\") pod \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.194174 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-catalog-content\") pod \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.194193 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-catalog-content\") pod \"286d64f0-03e6-439e-80d6-be053d25d93b\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.194224 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rggd6\" (UniqueName: \"kubernetes.io/projected/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-kube-api-access-rggd6\") pod \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\" (UID: \"fd5d97ce-9155-4ca4-856a-99db2e1b46a6\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.194255 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-utilities\") pod \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.194302 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k62l\" (UniqueName: 
\"kubernetes.io/projected/286d64f0-03e6-439e-80d6-be053d25d93b-kube-api-access-6k62l\") pod \"286d64f0-03e6-439e-80d6-be053d25d93b\" (UID: \"286d64f0-03e6-439e-80d6-be053d25d93b\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.194330 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnnzk\" (UniqueName: \"kubernetes.io/projected/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-kube-api-access-wnnzk\") pod \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\" (UID: \"51f862c6-f9e4-4c88-b0fa-5b4e5acb7856\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.202979 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-utilities" (OuterVolumeSpecName: "utilities") pod "fd5d97ce-9155-4ca4-856a-99db2e1b46a6" (UID: "fd5d97ce-9155-4ca4-856a-99db2e1b46a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.204281 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-utilities" (OuterVolumeSpecName: "utilities") pod "286d64f0-03e6-439e-80d6-be053d25d93b" (UID: "286d64f0-03e6-439e-80d6-be053d25d93b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.204854 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-kube-api-access-wnnzk" (OuterVolumeSpecName: "kube-api-access-wnnzk") pod "51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" (UID: "51f862c6-f9e4-4c88-b0fa-5b4e5acb7856"). InnerVolumeSpecName "kube-api-access-wnnzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.205228 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-utilities" (OuterVolumeSpecName: "utilities") pod "51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" (UID: "51f862c6-f9e4-4c88-b0fa-5b4e5acb7856"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.220878 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/286d64f0-03e6-439e-80d6-be053d25d93b-kube-api-access-6k62l" (OuterVolumeSpecName: "kube-api-access-6k62l") pod "286d64f0-03e6-439e-80d6-be053d25d93b" (UID: "286d64f0-03e6-439e-80d6-be053d25d93b"). InnerVolumeSpecName "kube-api-access-6k62l". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.223105 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-kube-api-access-rggd6" (OuterVolumeSpecName: "kube-api-access-rggd6") pod "fd5d97ce-9155-4ca4-856a-99db2e1b46a6" (UID: "fd5d97ce-9155-4ca4-856a-99db2e1b46a6"). InnerVolumeSpecName "kube-api-access-rggd6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.271343 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.279025 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" (UID: "51f862c6-f9e4-4c88-b0fa-5b4e5acb7856"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.285244 4662 scope.go:117] "RemoveContainer" containerID="4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.296155 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6k62l\" (UniqueName: \"kubernetes.io/projected/286d64f0-03e6-439e-80d6-be053d25d93b-kube-api-access-6k62l\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.296219 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnnzk\" (UniqueName: \"kubernetes.io/projected/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-kube-api-access-wnnzk\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.296228 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.296237 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.296246 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.296254 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rggd6\" (UniqueName: \"kubernetes.io/projected/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-kube-api-access-rggd6\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.296261 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: E1208 09:18:29.296345 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b\": container with ID starting with 4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b not found: ID does not exist" containerID="4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.296371 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b"} err="failed to get container status \"4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b\": rpc error: code = NotFound desc = could not find container \"4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b\": container with ID starting with 
4fd0866d0fa5e143e1a157f27af2c1a3dec8719c93f33402fe121e0f4f90482b not found: ID does not exist" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.296394 4662 scope.go:117] "RemoveContainer" containerID="b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e" Dec 08 09:18:29 crc kubenswrapper[4662]: E1208 09:18:29.299176 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e\": container with ID starting with b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e not found: ID does not exist" containerID="b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.299249 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e"} err="failed to get container status \"b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e\": rpc error: code = NotFound desc = could not find container \"b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e\": container with ID starting with b0d9adb373be5b6ab58e6029ff8a60b266f82ee5d1844d5eba0141c944141e1e not found: ID does not exist" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.299272 4662 scope.go:117] "RemoveContainer" containerID="016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47" Dec 08 09:18:29 crc kubenswrapper[4662]: E1208 09:18:29.299609 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47\": container with ID starting with 016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47 not found: ID does not exist" containerID="016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.299631 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47"} err="failed to get container status \"016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47\": rpc error: code = NotFound desc = could not find container \"016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47\": container with ID starting with 016abcebcbc4de90bd45ae1286ce16cbe369e309b8777c133d1302bd1b21ef47 not found: ID does not exist" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.299646 4662 scope.go:117] "RemoveContainer" containerID="f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.311885 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "286d64f0-03e6-439e-80d6-be053d25d93b" (UID: "286d64f0-03e6-439e-80d6-be053d25d93b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.323313 4662 scope.go:117] "RemoveContainer" containerID="2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.336981 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqjvm"] Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.349102 4662 scope.go:117] "RemoveContainer" containerID="e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.349928 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fd5d97ce-9155-4ca4-856a-99db2e1b46a6" (UID: "fd5d97ce-9155-4ca4-856a-99db2e1b46a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.370535 4662 scope.go:117] "RemoveContainer" containerID="f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0" Dec 08 09:18:29 crc kubenswrapper[4662]: E1208 09:18:29.370942 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0\": container with ID starting with f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0 not found: ID does not exist" containerID="f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.370974 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0"} err="failed to get container status \"f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0\": rpc error: code = NotFound desc = could not find container \"f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0\": container with ID starting with f277bde48f36a630ccfb2092004213f4689d4b23764ef87ec8a66b888139bcd0 not found: ID does not exist" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.371001 4662 scope.go:117] "RemoveContainer" containerID="2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8" Dec 08 09:18:29 crc kubenswrapper[4662]: E1208 09:18:29.371239 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8\": container with ID starting with 2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8 not found: ID does not exist" containerID="2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.371265 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8"} err="failed to get container status \"2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8\": rpc error: code = NotFound desc = could not find container \"2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8\": container with ID starting with 2329f6f58413dfdd72ceade42c72d0adfc64ea1c21354ab8ca0ec7a1ca0c6ea8 not found: ID does not exist" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 
09:18:29.371282 4662 scope.go:117] "RemoveContainer" containerID="e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8" Dec 08 09:18:29 crc kubenswrapper[4662]: E1208 09:18:29.371495 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8\": container with ID starting with e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8 not found: ID does not exist" containerID="e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.371520 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8"} err="failed to get container status \"e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8\": rpc error: code = NotFound desc = could not find container \"e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8\": container with ID starting with e1c52b92be62abb905cf0c82f41a670fa781cb283faeec252e98fe93f7e3adc8 not found: ID does not exist" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.371535 4662 scope.go:117] "RemoveContainer" containerID="13b65d6fac8422fd2c09fd0e0534aa6b158f8dca95d719c11fcb9cf8da97739e" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.391911 4662 scope.go:117] "RemoveContainer" containerID="943f4d611732fb49e2abbd20e551d3b6f054c9a5eff046bb75ed96e149b821df" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.397838 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-catalog-content\") pod \"5edca319-b627-48cd-b8ed-164446cefc08\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.397889 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/5edca319-b627-48cd-b8ed-164446cefc08-kube-api-access-pkz5n\") pod \"5edca319-b627-48cd-b8ed-164446cefc08\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.398003 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-utilities\") pod \"5edca319-b627-48cd-b8ed-164446cefc08\" (UID: \"5edca319-b627-48cd-b8ed-164446cefc08\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.398255 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fd5d97ce-9155-4ca4-856a-99db2e1b46a6-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.398291 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/286d64f0-03e6-439e-80d6-be053d25d93b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.399241 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-utilities" (OuterVolumeSpecName: "utilities") pod "5edca319-b627-48cd-b8ed-164446cefc08" (UID: "5edca319-b627-48cd-b8ed-164446cefc08"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.403116 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5edca319-b627-48cd-b8ed-164446cefc08-kube-api-access-pkz5n" (OuterVolumeSpecName: "kube-api-access-pkz5n") pod "5edca319-b627-48cd-b8ed-164446cefc08" (UID: "5edca319-b627-48cd-b8ed-164446cefc08"). InnerVolumeSpecName "kube-api-access-pkz5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.414509 4662 scope.go:117] "RemoveContainer" containerID="14bc472d321038eaec1641fca0d1eabcd859a8cf7d5d911990e7a73a1544a8ba" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.438481 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5kv9l"] Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.467523 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5kv9l"] Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.476773 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzx85"] Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.482679 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jzx85"] Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.506924 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4vsqx"] Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.506938 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.506979 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/5edca319-b627-48cd-b8ed-164446cefc08-kube-api-access-pkz5n\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.511005 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4vsqx"] Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.514021 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.559537 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5edca319-b627-48cd-b8ed-164446cefc08" (UID: "5edca319-b627-48cd-b8ed-164446cefc08"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.561319 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k8mdp"] Dec 08 09:18:29 crc kubenswrapper[4662]: W1208 09:18:29.594516 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8de480c6_7855_45e8_91ad_574e204414ae.slice/crio-582ac580f96afc0401210e405f73ccc0fd410b5c96f8e2e1e8d403ed4db5093f WatchSource:0}: Error finding container 582ac580f96afc0401210e405f73ccc0fd410b5c96f8e2e1e8d403ed4db5093f: Status 404 returned error can't find the container with id 582ac580f96afc0401210e405f73ccc0fd410b5c96f8e2e1e8d403ed4db5093f Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.609496 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-trusted-ca\") pod \"20d06f62-0413-4b18-9e01-05932b0a663b\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.609544 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fg9c2\" (UniqueName: \"kubernetes.io/projected/20d06f62-0413-4b18-9e01-05932b0a663b-kube-api-access-fg9c2\") pod \"20d06f62-0413-4b18-9e01-05932b0a663b\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.609569 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-operator-metrics\") pod \"20d06f62-0413-4b18-9e01-05932b0a663b\" (UID: \"20d06f62-0413-4b18-9e01-05932b0a663b\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.610031 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5edca319-b627-48cd-b8ed-164446cefc08-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.618381 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20d06f62-0413-4b18-9e01-05932b0a663b-kube-api-access-fg9c2" (OuterVolumeSpecName: "kube-api-access-fg9c2") pod "20d06f62-0413-4b18-9e01-05932b0a663b" (UID: "20d06f62-0413-4b18-9e01-05932b0a663b"). InnerVolumeSpecName "kube-api-access-fg9c2". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.620268 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "20d06f62-0413-4b18-9e01-05932b0a663b" (UID: "20d06f62-0413-4b18-9e01-05932b0a663b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.641860 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "20d06f62-0413-4b18-9e01-05932b0a663b" (UID: "20d06f62-0413-4b18-9e01-05932b0a663b"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.710705 4662 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.710762 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fg9c2\" (UniqueName: \"kubernetes.io/projected/20d06f62-0413-4b18-9e01-05932b0a663b-kube-api-access-fg9c2\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.710774 4662 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/20d06f62-0413-4b18-9e01-05932b0a663b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.795067 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.863867 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.867620 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.912769 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-utilities\") pod \"d746d881-44be-4a9e-8b0f-f619328d1610\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.912826 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-catalog-content\") pod \"d746d881-44be-4a9e-8b0f-f619328d1610\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.912855 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rrkc4\" (UniqueName: \"kubernetes.io/projected/d746d881-44be-4a9e-8b0f-f619328d1610-kube-api-access-rrkc4\") pod \"d746d881-44be-4a9e-8b0f-f619328d1610\" (UID: \"d746d881-44be-4a9e-8b0f-f619328d1610\") " Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.916907 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-utilities" (OuterVolumeSpecName: "utilities") pod "d746d881-44be-4a9e-8b0f-f619328d1610" (UID: "d746d881-44be-4a9e-8b0f-f619328d1610"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:29 crc kubenswrapper[4662]: I1208 09:18:29.917328 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d746d881-44be-4a9e-8b0f-f619328d1610-kube-api-access-rrkc4" (OuterVolumeSpecName: "kube-api-access-rrkc4") pod "d746d881-44be-4a9e-8b0f-f619328d1610" (UID: "d746d881-44be-4a9e-8b0f-f619328d1610"). InnerVolumeSpecName "kube-api-access-rrkc4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.014057 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-catalog-content\") pod \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.014114 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-utilities\") pod \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.014155 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-utilities\") pod \"ad586273-365b-4e31-bcc4-d78731f84d8c\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.014187 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5l9b\" (UniqueName: \"kubernetes.io/projected/ad586273-365b-4e31-bcc4-d78731f84d8c-kube-api-access-k5l9b\") pod \"ad586273-365b-4e31-bcc4-d78731f84d8c\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.014214 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-catalog-content\") pod \"ad586273-365b-4e31-bcc4-d78731f84d8c\" (UID: \"ad586273-365b-4e31-bcc4-d78731f84d8c\") " Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.014236 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjjfd\" (UniqueName: \"kubernetes.io/projected/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-kube-api-access-cjjfd\") pod \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\" (UID: \"2cd2fc20-49c6-49c1-8111-a3ea33c114d1\") " Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.014437 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.014453 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rrkc4\" (UniqueName: \"kubernetes.io/projected/d746d881-44be-4a9e-8b0f-f619328d1610-kube-api-access-rrkc4\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.015775 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-utilities" (OuterVolumeSpecName: "utilities") pod "ad586273-365b-4e31-bcc4-d78731f84d8c" (UID: "ad586273-365b-4e31-bcc4-d78731f84d8c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.015843 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-utilities" (OuterVolumeSpecName: "utilities") pod "2cd2fc20-49c6-49c1-8111-a3ea33c114d1" (UID: "2cd2fc20-49c6-49c1-8111-a3ea33c114d1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.017998 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad586273-365b-4e31-bcc4-d78731f84d8c-kube-api-access-k5l9b" (OuterVolumeSpecName: "kube-api-access-k5l9b") pod "ad586273-365b-4e31-bcc4-d78731f84d8c" (UID: "ad586273-365b-4e31-bcc4-d78731f84d8c"). InnerVolumeSpecName "kube-api-access-k5l9b". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.018288 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-kube-api-access-cjjfd" (OuterVolumeSpecName: "kube-api-access-cjjfd") pod "2cd2fc20-49c6-49c1-8111-a3ea33c114d1" (UID: "2cd2fc20-49c6-49c1-8111-a3ea33c114d1"). InnerVolumeSpecName "kube-api-access-cjjfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.032925 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2cd2fc20-49c6-49c1-8111-a3ea33c114d1" (UID: "2cd2fc20-49c6-49c1-8111-a3ea33c114d1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.107621 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad586273-365b-4e31-bcc4-d78731f84d8c" (UID: "ad586273-365b-4e31-bcc4-d78731f84d8c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.115796 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.115844 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjjfd\" (UniqueName: \"kubernetes.io/projected/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-kube-api-access-cjjfd\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.115862 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.115874 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cd2fc20-49c6-49c1-8111-a3ea33c114d1-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.115885 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad586273-365b-4e31-bcc4-d78731f84d8c-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.115896 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5l9b\" (UniqueName: \"kubernetes.io/projected/ad586273-365b-4e31-bcc4-d78731f84d8c-kube-api-access-k5l9b\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.147209 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d746d881-44be-4a9e-8b0f-f619328d1610" (UID: "d746d881-44be-4a9e-8b0f-f619328d1610"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.153021 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-64xq9" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.153077 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-64xq9" event={"ID":"d746d881-44be-4a9e-8b0f-f619328d1610","Type":"ContainerDied","Data":"ef1f678754651bcbb6d77077763fa304b4c2818d3ac7699a4ccd39c3e651713f"} Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.153141 4662 scope.go:117] "RemoveContainer" containerID="4c93f0478b003ab7608804d556926c6caf0c8630d0812360fbc83cc7087a96e1" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.156639 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" event={"ID":"8de480c6-7855-45e8-91ad-574e204414ae","Type":"ContainerStarted","Data":"582ac580f96afc0401210e405f73ccc0fd410b5c96f8e2e1e8d403ed4db5093f"} Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.158940 4662 generic.go:334] "Generic (PLEG): container finished" podID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerID="c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa" exitCode=0 Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.159022 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqjvm" event={"ID":"ad586273-365b-4e31-bcc4-d78731f84d8c","Type":"ContainerDied","Data":"c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa"} Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.159044 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nqjvm" event={"ID":"ad586273-365b-4e31-bcc4-d78731f84d8c","Type":"ContainerDied","Data":"4e4f27fdff350aa3df55a4ea2ff8e2f6fad2668c0bf6cd262bfe466b046dc192"} Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.159102 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nqjvm" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.172599 4662 scope.go:117] "RemoveContainer" containerID="3043c07d0abc3c4988148dd95413471750d31c3e50b2a9f1b4238e034fc80c64" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.173107 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pfnf8" event={"ID":"5edca319-b627-48cd-b8ed-164446cefc08","Type":"ContainerDied","Data":"0c8532fbf75eaeef890ded86481f0ae351fb14386b1399da01c45c00465d7c84"} Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.173134 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pfnf8" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.174924 4662 generic.go:334] "Generic (PLEG): container finished" podID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" containerID="4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44" exitCode=0 Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.174969 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvfv4" event={"ID":"2cd2fc20-49c6-49c1-8111-a3ea33c114d1","Type":"ContainerDied","Data":"4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44"} Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.174985 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvfv4" event={"ID":"2cd2fc20-49c6-49c1-8111-a3ea33c114d1","Type":"ContainerDied","Data":"8bb8e0fb48e5d4d4868583fb33f284f49b67d2d09c9baf35a8941a5519318ca4"} Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.175062 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvfv4" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.192010 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.195411 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-z6pb2" event={"ID":"20d06f62-0413-4b18-9e01-05932b0a663b","Type":"ContainerDied","Data":"9b527b59e4bd4387224aade56f8f9d323075efd4606bb41af8f937f1bfa75b24"} Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.203095 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-64xq9"] Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.209967 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-64xq9"] Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.216606 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nqjvm"] Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.217436 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d746d881-44be-4a9e-8b0f-f619328d1610-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.220255 4662 scope.go:117] "RemoveContainer" containerID="091e513535c6ed9004759f79e11f1f56fe2db4f29bec27a1e6b39a88771bb93e" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.223623 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nqjvm"] Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.227636 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvfv4"] Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.233266 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvfv4"] Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.249609 4662 scope.go:117] "RemoveContainer" containerID="c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.255021 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z6pb2"] 
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.269067 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-z6pb2"]
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.271606 4662 scope.go:117] "RemoveContainer" containerID="df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.291676 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pfnf8"]
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.303984 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pfnf8"]
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.307695 4662 scope.go:117] "RemoveContainer" containerID="0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.344975 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sc8bf"]
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.345811 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.345832 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.345844 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.345851 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.345860 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.345868 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.345878 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.345884 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.345917 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.345924 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.345937 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.345944 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.345952 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5edca319-b627-48cd-b8ed-164446cefc08" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.345957 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="5edca319-b627-48cd-b8ed-164446cefc08" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.345964 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.345970 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.345978 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.345984 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.345992 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.345997 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346008 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20d06f62-0413-4b18-9e01-05932b0a663b" containerName="marketplace-operator"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346013 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="20d06f62-0413-4b18-9e01-05932b0a663b" containerName="marketplace-operator"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346021 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346027 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346036 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346042 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346048 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346054 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346068 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346074 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346081 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5edca319-b627-48cd-b8ed-164446cefc08" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346086 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="5edca319-b627-48cd-b8ed-164446cefc08" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346093 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346099 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346108 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346114 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346121 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346127 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346134 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346141 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346150 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5edca319-b627-48cd-b8ed-164446cefc08" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346155 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="5edca319-b627-48cd-b8ed-164446cefc08" containerName="extract-utilities"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.346162 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346168 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerName="extract-content"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346268 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346280 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346295 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="5edca319-b627-48cd-b8ed-164446cefc08" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.346304 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.347166 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="20d06f62-0413-4b18-9e01-05932b0a663b" containerName="marketplace-operator"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.347196 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.347206 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.347218 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" containerName="registry-server"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.348612 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sc8bf"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.351969 4662 scope.go:117] "RemoveContainer" containerID="c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.352691 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa\": container with ID starting with c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa not found: ID does not exist" containerID="c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.352799 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa"} err="failed to get container status \"c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa\": rpc error: code = NotFound desc = could not find container \"c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa\": container with ID starting with c1cb5ae615346faa578517ce0815eb41c2b86abc7a9cfd0fd773e9666c0b71fa not found: ID does not exist"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.353365 4662 scope.go:117] "RemoveContainer" containerID="df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.353931 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366\": container with ID starting with df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366 not found: ID does not exist" containerID="df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.353972 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366"} err="failed to get container status \"df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366\": rpc error: code = NotFound desc = could not find container \"df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366\": container with ID starting with df3e20f5cd014bf2425f035016b85e73ef0f9a2e9ecc08279f636639a8e23366 not found: ID does not exist"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.354002 4662 scope.go:117] "RemoveContainer" containerID="0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.354376 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216\": container with ID starting with 0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216 not found: ID does not exist" containerID="0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.354407 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216"} err="failed to get container status \"0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216\": rpc error: code = NotFound desc = could not find container \"0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216\": container with ID starting with 0de48f2936c188b3152e26ada79f6d19886b1082f2c75e6b96e61f60aafe4216 not found: ID does not exist"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.354427 4662 scope.go:117] "RemoveContainer" containerID="d1530e6302a7aae03ced03fda1dc49738ca5be1be805010d7cb7590cb94cd5ad"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.354550 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.354858 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc8bf"]
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.385197 4662 scope.go:117] "RemoveContainer" containerID="9a07db42306f58f2817fcf7f9e5a0f1ea827e38ae494ec8f8fc8ad35cc9d83af"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.410963 4662 scope.go:117] "RemoveContainer" containerID="dfe364e9f2e3d3d05e64c9a8cd6293aaf748b7c509a6c80860996dcdc5ba6112"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.420249 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfttm\" (UniqueName: \"kubernetes.io/projected/7473f855-7fa1-44f3-8841-6041a045c35a-kube-api-access-wfttm\") pod \"redhat-marketplace-sc8bf\" (UID: \"7473f855-7fa1-44f3-8841-6041a045c35a\") " pod="openshift-marketplace/redhat-marketplace-sc8bf"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.420312 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7473f855-7fa1-44f3-8841-6041a045c35a-catalog-content\") pod \"redhat-marketplace-sc8bf\" (UID: \"7473f855-7fa1-44f3-8841-6041a045c35a\") " pod="openshift-marketplace/redhat-marketplace-sc8bf"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.420364 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7473f855-7fa1-44f3-8841-6041a045c35a-utilities\") pod \"redhat-marketplace-sc8bf\" (UID: \"7473f855-7fa1-44f3-8841-6041a045c35a\") " pod="openshift-marketplace/redhat-marketplace-sc8bf"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.443938 4662 scope.go:117] "RemoveContainer" containerID="4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.464482 4662 scope.go:117] "RemoveContainer" containerID="02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.479029 4662 scope.go:117] "RemoveContainer" containerID="44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.490779 4662 scope.go:117] "RemoveContainer" containerID="4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.491269 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44\": container with ID starting with 4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44 not found: ID does not exist" containerID="4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.491311 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44"} err="failed to get container status \"4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44\": rpc error: code = NotFound desc = could not find container \"4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44\": container with ID starting with 4cdf3c6c66b71b9b07cc2109b04ff9a1b80963b501b8f5b25f9362f77583eb44 not found: ID does not exist"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.491340 4662 scope.go:117] "RemoveContainer" containerID="02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.491993 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515\": container with ID starting with 02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515 not found: ID does not exist" containerID="02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.492042 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515"} err="failed to get container status \"02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515\": rpc error: code = NotFound desc = could not find container \"02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515\": container with ID starting with 02119918dc4419975e8079761a01bf06d81c0c7b3d6ab9d041a2f901392a9515 not found: ID does not exist"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.492077 4662 scope.go:117] "RemoveContainer" containerID="44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb"
Dec 08 09:18:30 crc kubenswrapper[4662]: E1208 09:18:30.492475 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb\": container with ID starting with 44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb not found: ID does not exist" containerID="44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb"
Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.492502 4662 pod_container_deletor.go:53] "DeleteContainer returned error"
containerID={"Type":"cri-o","ID":"44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb"} err="failed to get container status \"44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb\": rpc error: code = NotFound desc = could not find container \"44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb\": container with ID starting with 44c58c4522fc763b031b6d51944751e74e27ba18b3587dcf1d31c2f53b10cfeb not found: ID does not exist" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.492517 4662 scope.go:117] "RemoveContainer" containerID="6632918a8e2281c149ff9533208260b6ee78ca5d2bd8b60f3f18623c8d7e74cd" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.521681 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7473f855-7fa1-44f3-8841-6041a045c35a-utilities\") pod \"redhat-marketplace-sc8bf\" (UID: \"7473f855-7fa1-44f3-8841-6041a045c35a\") " pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.521767 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfttm\" (UniqueName: \"kubernetes.io/projected/7473f855-7fa1-44f3-8841-6041a045c35a-kube-api-access-wfttm\") pod \"redhat-marketplace-sc8bf\" (UID: \"7473f855-7fa1-44f3-8841-6041a045c35a\") " pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.521799 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7473f855-7fa1-44f3-8841-6041a045c35a-catalog-content\") pod \"redhat-marketplace-sc8bf\" (UID: \"7473f855-7fa1-44f3-8841-6041a045c35a\") " pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.522221 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7473f855-7fa1-44f3-8841-6041a045c35a-catalog-content\") pod \"redhat-marketplace-sc8bf\" (UID: \"7473f855-7fa1-44f3-8841-6041a045c35a\") " pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.522382 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7473f855-7fa1-44f3-8841-6041a045c35a-utilities\") pod \"redhat-marketplace-sc8bf\" (UID: \"7473f855-7fa1-44f3-8841-6041a045c35a\") " pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.537697 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfttm\" (UniqueName: \"kubernetes.io/projected/7473f855-7fa1-44f3-8841-6041a045c35a-kube-api-access-wfttm\") pod \"redhat-marketplace-sc8bf\" (UID: \"7473f855-7fa1-44f3-8841-6041a045c35a\") " pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.667095 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.706603 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20d06f62-0413-4b18-9e01-05932b0a663b" path="/var/lib/kubelet/pods/20d06f62-0413-4b18-9e01-05932b0a663b/volumes" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.707071 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="286d64f0-03e6-439e-80d6-be053d25d93b" path="/var/lib/kubelet/pods/286d64f0-03e6-439e-80d6-be053d25d93b/volumes" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.707597 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cd2fc20-49c6-49c1-8111-a3ea33c114d1" path="/var/lib/kubelet/pods/2cd2fc20-49c6-49c1-8111-a3ea33c114d1/volumes" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.708724 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51f862c6-f9e4-4c88-b0fa-5b4e5acb7856" path="/var/lib/kubelet/pods/51f862c6-f9e4-4c88-b0fa-5b4e5acb7856/volumes" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.709264 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5edca319-b627-48cd-b8ed-164446cefc08" path="/var/lib/kubelet/pods/5edca319-b627-48cd-b8ed-164446cefc08/volumes" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.710228 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad586273-365b-4e31-bcc4-d78731f84d8c" path="/var/lib/kubelet/pods/ad586273-365b-4e31-bcc4-d78731f84d8c/volumes" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.711054 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d746d881-44be-4a9e-8b0f-f619328d1610" path="/var/lib/kubelet/pods/d746d881-44be-4a9e-8b0f-f619328d1610/volumes" Dec 08 09:18:30 crc kubenswrapper[4662]: I1208 09:18:30.711584 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd5d97ce-9155-4ca4-856a-99db2e1b46a6" path="/var/lib/kubelet/pods/fd5d97ce-9155-4ca4-856a-99db2e1b46a6/volumes" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.056194 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sc8bf"] Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.198090 4662 generic.go:334] "Generic (PLEG): container finished" podID="7473f855-7fa1-44f3-8841-6041a045c35a" containerID="3401fac980648113a2513ebb3fd240ebd940235aff6aa0144b00cabaa3de1604" exitCode=0 Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.198444 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc8bf" event={"ID":"7473f855-7fa1-44f3-8841-6041a045c35a","Type":"ContainerDied","Data":"3401fac980648113a2513ebb3fd240ebd940235aff6aa0144b00cabaa3de1604"} Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.198473 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc8bf" event={"ID":"7473f855-7fa1-44f3-8841-6041a045c35a","Type":"ContainerStarted","Data":"7fe8c0fe2efcdaf6cb146f74fa1d0222baffb80d26c3ab0671b8fca6df793c78"} Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.200532 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" event={"ID":"8de480c6-7855-45e8-91ad-574e204414ae","Type":"ContainerStarted","Data":"3078a296ccc501f02109dadae0b0f9d10806a7aea4a5ddb8077575409e5a431d"} Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.201347 4662 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.206398 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.235142 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k8mdp" podStartSLOduration=3.235123473 podStartE2EDuration="3.235123473s" podCreationTimestamp="2025-12-08 09:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:18:31.233194731 +0000 UTC m=+234.802222721" watchObservedRunningTime="2025-12-08 09:18:31.235123473 +0000 UTC m=+234.804151453" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.744078 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8lxnn"] Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.748995 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.751133 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.768118 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lxnn"] Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.837103 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8e99c2-28da-435a-b3bf-f3b7e71f783c-catalog-content\") pod \"certified-operators-8lxnn\" (UID: \"ae8e99c2-28da-435a-b3bf-f3b7e71f783c\") " pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.837190 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8e99c2-28da-435a-b3bf-f3b7e71f783c-utilities\") pod \"certified-operators-8lxnn\" (UID: \"ae8e99c2-28da-435a-b3bf-f3b7e71f783c\") " pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.837226 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djmv5\" (UniqueName: \"kubernetes.io/projected/ae8e99c2-28da-435a-b3bf-f3b7e71f783c-kube-api-access-djmv5\") pod \"certified-operators-8lxnn\" (UID: \"ae8e99c2-28da-435a-b3bf-f3b7e71f783c\") " pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.938252 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djmv5\" (UniqueName: \"kubernetes.io/projected/ae8e99c2-28da-435a-b3bf-f3b7e71f783c-kube-api-access-djmv5\") pod \"certified-operators-8lxnn\" (UID: \"ae8e99c2-28da-435a-b3bf-f3b7e71f783c\") " pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.938314 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ae8e99c2-28da-435a-b3bf-f3b7e71f783c-catalog-content\") pod \"certified-operators-8lxnn\" (UID: \"ae8e99c2-28da-435a-b3bf-f3b7e71f783c\") " pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.938396 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8e99c2-28da-435a-b3bf-f3b7e71f783c-utilities\") pod \"certified-operators-8lxnn\" (UID: \"ae8e99c2-28da-435a-b3bf-f3b7e71f783c\") " pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.939097 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ae8e99c2-28da-435a-b3bf-f3b7e71f783c-catalog-content\") pod \"certified-operators-8lxnn\" (UID: \"ae8e99c2-28da-435a-b3bf-f3b7e71f783c\") " pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.939215 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ae8e99c2-28da-435a-b3bf-f3b7e71f783c-utilities\") pod \"certified-operators-8lxnn\" (UID: \"ae8e99c2-28da-435a-b3bf-f3b7e71f783c\") " pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:31 crc kubenswrapper[4662]: I1208 09:18:31.971355 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djmv5\" (UniqueName: \"kubernetes.io/projected/ae8e99c2-28da-435a-b3bf-f3b7e71f783c-kube-api-access-djmv5\") pod \"certified-operators-8lxnn\" (UID: \"ae8e99c2-28da-435a-b3bf-f3b7e71f783c\") " pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.066815 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.231672 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc8bf" event={"ID":"7473f855-7fa1-44f3-8841-6041a045c35a","Type":"ContainerStarted","Data":"deb99e11d0dca61f5211be804782a89ff8b823990ebb29a4c962d3cd01dc02b6"} Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.474652 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8lxnn"] Dec 08 09:18:32 crc kubenswrapper[4662]: W1208 09:18:32.482947 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae8e99c2_28da_435a_b3bf_f3b7e71f783c.slice/crio-eca2c30bb5547df4a4b44f768084fc0a5acd3e2577b7407fedf77a0c99d49a28 WatchSource:0}: Error finding container eca2c30bb5547df4a4b44f768084fc0a5acd3e2577b7407fedf77a0c99d49a28: Status 404 returned error can't find the container with id eca2c30bb5547df4a4b44f768084fc0a5acd3e2577b7407fedf77a0c99d49a28 Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.756841 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-smctn"] Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.759180 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.763667 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.766203 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smctn"] Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.855774 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55kpk\" (UniqueName: \"kubernetes.io/projected/09a4e6a7-2384-4ead-a4c7-396ff35e0bee-kube-api-access-55kpk\") pod \"community-operators-smctn\" (UID: \"09a4e6a7-2384-4ead-a4c7-396ff35e0bee\") " pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.855846 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a4e6a7-2384-4ead-a4c7-396ff35e0bee-catalog-content\") pod \"community-operators-smctn\" (UID: \"09a4e6a7-2384-4ead-a4c7-396ff35e0bee\") " pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.855884 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a4e6a7-2384-4ead-a4c7-396ff35e0bee-utilities\") pod \"community-operators-smctn\" (UID: \"09a4e6a7-2384-4ead-a4c7-396ff35e0bee\") " pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.957510 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55kpk\" (UniqueName: \"kubernetes.io/projected/09a4e6a7-2384-4ead-a4c7-396ff35e0bee-kube-api-access-55kpk\") pod \"community-operators-smctn\" (UID: \"09a4e6a7-2384-4ead-a4c7-396ff35e0bee\") " pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.957570 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a4e6a7-2384-4ead-a4c7-396ff35e0bee-catalog-content\") pod \"community-operators-smctn\" (UID: \"09a4e6a7-2384-4ead-a4c7-396ff35e0bee\") " pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.957608 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a4e6a7-2384-4ead-a4c7-396ff35e0bee-utilities\") pod \"community-operators-smctn\" (UID: \"09a4e6a7-2384-4ead-a4c7-396ff35e0bee\") " pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.958117 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/09a4e6a7-2384-4ead-a4c7-396ff35e0bee-utilities\") pod \"community-operators-smctn\" (UID: \"09a4e6a7-2384-4ead-a4c7-396ff35e0bee\") " pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.958648 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/09a4e6a7-2384-4ead-a4c7-396ff35e0bee-catalog-content\") pod \"community-operators-smctn\" (UID: 
\"09a4e6a7-2384-4ead-a4c7-396ff35e0bee\") " pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:32 crc kubenswrapper[4662]: I1208 09:18:32.977714 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55kpk\" (UniqueName: \"kubernetes.io/projected/09a4e6a7-2384-4ead-a4c7-396ff35e0bee-kube-api-access-55kpk\") pod \"community-operators-smctn\" (UID: \"09a4e6a7-2384-4ead-a4c7-396ff35e0bee\") " pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:33 crc kubenswrapper[4662]: I1208 09:18:33.082486 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:33 crc kubenswrapper[4662]: I1208 09:18:33.253135 4662 generic.go:334] "Generic (PLEG): container finished" podID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" containerID="a2ddd27e18a19173c86d4317647f37aadbb1e054ab8d13893a428e7e6ecbbf6f" exitCode=0 Dec 08 09:18:33 crc kubenswrapper[4662]: I1208 09:18:33.253278 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lxnn" event={"ID":"ae8e99c2-28da-435a-b3bf-f3b7e71f783c","Type":"ContainerDied","Data":"a2ddd27e18a19173c86d4317647f37aadbb1e054ab8d13893a428e7e6ecbbf6f"} Dec 08 09:18:33 crc kubenswrapper[4662]: I1208 09:18:33.253416 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lxnn" event={"ID":"ae8e99c2-28da-435a-b3bf-f3b7e71f783c","Type":"ContainerStarted","Data":"eca2c30bb5547df4a4b44f768084fc0a5acd3e2577b7407fedf77a0c99d49a28"} Dec 08 09:18:33 crc kubenswrapper[4662]: I1208 09:18:33.256085 4662 generic.go:334] "Generic (PLEG): container finished" podID="7473f855-7fa1-44f3-8841-6041a045c35a" containerID="deb99e11d0dca61f5211be804782a89ff8b823990ebb29a4c962d3cd01dc02b6" exitCode=0 Dec 08 09:18:33 crc kubenswrapper[4662]: I1208 09:18:33.257027 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc8bf" event={"ID":"7473f855-7fa1-44f3-8841-6041a045c35a","Type":"ContainerDied","Data":"deb99e11d0dca61f5211be804782a89ff8b823990ebb29a4c962d3cd01dc02b6"} Dec 08 09:18:33 crc kubenswrapper[4662]: I1208 09:18:33.401284 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-smctn"] Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.062882 4662 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.063699 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.064228 4662 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.064343 4662 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.064435 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3" gracePeriod=15 Dec 08 09:18:34 crc kubenswrapper[4662]: E1208 09:18:34.068833 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069023 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 08 09:18:34 crc kubenswrapper[4662]: E1208 09:18:34.069039 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069046 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 09:18:34 crc kubenswrapper[4662]: E1208 09:18:34.069060 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069066 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Dec 08 09:18:34 crc kubenswrapper[4662]: E1208 09:18:34.069080 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069086 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 08 09:18:34 crc kubenswrapper[4662]: E1208 09:18:34.069099 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069105 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 08 09:18:34 crc kubenswrapper[4662]: E1208 09:18:34.069117 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069123 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069339 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069352 4662 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069364 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069370 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069468 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069476 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 09:18:34 crc kubenswrapper[4662]: E1208 09:18:34.069622 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.069629 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.070194 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782" gracePeriod=15 Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.070242 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41" gracePeriod=15 Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.070282 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15" gracePeriod=15 Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.070794 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e" gracePeriod=15 Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.113397 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.174154 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: 
I1208 09:18:34.174216 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.174246 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.174272 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.174292 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.174306 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.174329 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.174344 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.261560 4662 generic.go:334] "Generic (PLEG): container finished" podID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" containerID="c784fde1db896288fcb693fc9c8344fa535dd0855eb535400dc1192df3a8fe0c" exitCode=0 Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.261649 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smctn" event={"ID":"09a4e6a7-2384-4ead-a4c7-396ff35e0bee","Type":"ContainerDied","Data":"c784fde1db896288fcb693fc9c8344fa535dd0855eb535400dc1192df3a8fe0c"} Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.261681 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smctn" 
event={"ID":"09a4e6a7-2384-4ead-a4c7-396ff35e0bee","Type":"ContainerStarted","Data":"a0c5e0ba943e7f2f7723e8557f267e5f9d378565cad73d44196e38dda7ef9ac4"} Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.262320 4662 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.262576 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.262978 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: E1208 09:18:34.264570 4662 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-smctn.187f32df7462cf48 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-smctn,UID:09a4e6a7-2384-4ead-a4c7-396ff35e0bee,APIVersion:v1,ResourceVersion:29445,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/community-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 09:18:34.264252232 +0000 UTC m=+237.833280222,LastTimestamp:2025-12-08 09:18:34.264252232 +0000 UTC m=+237.833280222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.265238 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.267984 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.268786 4662 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e" exitCode=0 Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.268838 4662 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41" exitCode=0 Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.268851 4662 generic.go:334] "Generic (PLEG): container finished" 
podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15" exitCode=0 Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.268860 4662 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782" exitCode=2 Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.268894 4662 scope.go:117] "RemoveContainer" containerID="ead85b850bcb82f892ee8448b0240ddc52435a121ca134deb2c05d3b99317342" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275236 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275283 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275311 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275332 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275355 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275371 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275396 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275410 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod 
\"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275488 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275519 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275538 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275557 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275575 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275594 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275613 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.275630 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.278988 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sc8bf" event={"ID":"7473f855-7fa1-44f3-8841-6041a045c35a","Type":"ContainerStarted","Data":"9ad082ca16771a96a9919798fee13df64f190ec56b5e388027ba0128e7396b4a"} Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.279860 4662 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.280033 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.280208 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.280358 4662 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.282606 4662 generic.go:334] "Generic (PLEG): container finished" podID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" containerID="75f5bc9eb8e468fe3d90dd78091355322cdc2347cdf1a84c96de45ca2d2bfb16" exitCode=0 Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.282647 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1","Type":"ContainerDied","Data":"75f5bc9eb8e468fe3d90dd78091355322cdc2347cdf1a84c96de45ca2d2bfb16"} Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.283161 4662 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.283359 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.283546 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.283732 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.283955 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:34 crc kubenswrapper[4662]: I1208 09:18:34.412091 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:18:34 crc kubenswrapper[4662]: W1208 09:18:34.428118 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-111c21f033053bb8a73dc1261e69a067c93910a751334f87ae46f6314d597dc4 WatchSource:0}: Error finding container 111c21f033053bb8a73dc1261e69a067c93910a751334f87ae46f6314d597dc4: Status 404 returned error can't find the container with id 111c21f033053bb8a73dc1261e69a067c93910a751334f87ae46f6314d597dc4 Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.293750 4662 generic.go:334] "Generic (PLEG): container finished" podID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" containerID="136d900e03422542b2fda180fc48d0be702a70145f87fcd72da62064b09fba8f" exitCode=0 Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.293844 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lxnn" event={"ID":"ae8e99c2-28da-435a-b3bf-f3b7e71f783c","Type":"ContainerDied","Data":"136d900e03422542b2fda180fc48d0be702a70145f87fcd72da62064b09fba8f"} Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.295817 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.296366 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.296720 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.297004 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.297255 4662 
status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.298402 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.301426 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ac2f4529fbb7c4d8666f20901a05db06bd04eeb3030db8b15b0e4300f001cac1"} Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.301473 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"111c21f033053bb8a73dc1261e69a067c93910a751334f87ae46f6314d597dc4"} Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.301753 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.301966 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.302207 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.302521 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.302701 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.303878 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smctn" event={"ID":"09a4e6a7-2384-4ead-a4c7-396ff35e0bee","Type":"ContainerStarted","Data":"2cfbb777498524c93738ae2659b6e555b4ad8bedccbde7c6c9738c0547cee883"} Dec 
08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.304465 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.304709 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.304993 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.305185 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.305400 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.559032 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.559903 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.560322 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.560545 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.560772 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.561009 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.590958 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kube-api-access\") pod \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.591117 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-var-lock\") pod \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.591142 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kubelet-dir\") pod \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\" (UID: \"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1\") " Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.591157 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-var-lock" (OuterVolumeSpecName: "var-lock") pod "73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" (UID: "73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.591279 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" (UID: "73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.591574 4662 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.591590 4662 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.604913 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" (UID: "73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:35 crc kubenswrapper[4662]: I1208 09:18:35.692356 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.310515 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1","Type":"ContainerDied","Data":"60880d44a5c97de2e7d93f3b055b5e25b8ef3f5d977acf174b2f3048b75c07f6"} Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.312227 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60880d44a5c97de2e7d93f3b055b5e25b8ef3f5d977acf174b2f3048b75c07f6" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.310570 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.318370 4662 generic.go:334] "Generic (PLEG): container finished" podID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" containerID="2cfbb777498524c93738ae2659b6e555b4ad8bedccbde7c6c9738c0547cee883" exitCode=0 Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.318431 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smctn" event={"ID":"09a4e6a7-2384-4ead-a4c7-396ff35e0bee","Type":"ContainerDied","Data":"2cfbb777498524c93738ae2659b6e555b4ad8bedccbde7c6c9738c0547cee883"} Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.319182 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.319431 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.319787 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.320029 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.320244 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.321648 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8lxnn" event={"ID":"ae8e99c2-28da-435a-b3bf-f3b7e71f783c","Type":"ContainerStarted","Data":"66fd314545e05ed63da54ab1c42c7631ccf9a4cbc7126ad4701c026bd0b26061"} Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.323657 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.323911 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.324122 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.324703 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.329119 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.335427 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.335825 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.336169 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.336505 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.336908 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: E1208 09:18:36.621400 4662 event.go:368] "Unable to write event (may retry after 
sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-smctn.187f32df7462cf48 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-smctn,UID:09a4e6a7-2384-4ead-a4c7-396ff35e0bee,APIVersion:v1,ResourceVersion:29445,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/community-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 09:18:34.264252232 +0000 UTC m=+237.833280222,LastTimestamp:2025-12-08 09:18:34.264252232 +0000 UTC m=+237.833280222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.699505 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.700302 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.700785 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.701058 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:36 crc kubenswrapper[4662]: I1208 09:18:36.701462 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.147050 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.148159 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.148891 4662 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.149390 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.149805 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.150147 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.150620 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.150912 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.223277 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.223665 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.223712 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.223396 4662 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.224024 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.224079 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.328039 4662 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.328077 4662 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.328087 4662 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.330814 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.331466 4662 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3" exitCode=0 Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.331543 4662 scope.go:117] "RemoveContainer" containerID="0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.331563 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.349094 4662 scope.go:117] "RemoveContainer" containerID="5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.353083 4662 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.353391 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.353633 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.353959 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.354154 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.354376 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.366241 4662 scope.go:117] "RemoveContainer" containerID="9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.398381 4662 scope.go:117] "RemoveContainer" containerID="af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.424882 4662 scope.go:117] "RemoveContainer" containerID="04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.451969 4662 scope.go:117] "RemoveContainer" containerID="aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.487647 4662 scope.go:117] "RemoveContainer" containerID="0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e" Dec 08 09:18:37 crc 
kubenswrapper[4662]: E1208 09:18:37.488920 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\": container with ID starting with 0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e not found: ID does not exist" containerID="0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.488969 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e"} err="failed to get container status \"0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\": rpc error: code = NotFound desc = could not find container \"0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e\": container with ID starting with 0dc8c833aa3521bfbef39d3c8864a6cf62b9706f31bd0ade41e1066e89d7806e not found: ID does not exist" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.488999 4662 scope.go:117] "RemoveContainer" containerID="5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41" Dec 08 09:18:37 crc kubenswrapper[4662]: E1208 09:18:37.490003 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\": container with ID starting with 5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41 not found: ID does not exist" containerID="5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.490053 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41"} err="failed to get container status \"5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\": rpc error: code = NotFound desc = could not find container \"5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41\": container with ID starting with 5d48f50bbc459b4056152ebc4863713ec4570349d84f8355d2e2d3b5681bca41 not found: ID does not exist" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.490085 4662 scope.go:117] "RemoveContainer" containerID="9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15" Dec 08 09:18:37 crc kubenswrapper[4662]: E1208 09:18:37.490542 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\": container with ID starting with 9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15 not found: ID does not exist" containerID="9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.490567 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15"} err="failed to get container status \"9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\": rpc error: code = NotFound desc = could not find container \"9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15\": container with ID starting with 9e5be146fc1d2729add952205b9e415a094647e037a99e0886e1ea6070339a15 not found: ID does not exist" Dec 08 09:18:37 crc kubenswrapper[4662]: 
I1208 09:18:37.490581 4662 scope.go:117] "RemoveContainer" containerID="af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782" Dec 08 09:18:37 crc kubenswrapper[4662]: E1208 09:18:37.490882 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\": container with ID starting with af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782 not found: ID does not exist" containerID="af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.490913 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782"} err="failed to get container status \"af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\": rpc error: code = NotFound desc = could not find container \"af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782\": container with ID starting with af05d9e8f56fb17400cec928d734c38a8251f6ecf9d6f06e9c6fc25ce653e782 not found: ID does not exist" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.490934 4662 scope.go:117] "RemoveContainer" containerID="04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3" Dec 08 09:18:37 crc kubenswrapper[4662]: E1208 09:18:37.491272 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\": container with ID starting with 04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3 not found: ID does not exist" containerID="04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.491316 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3"} err="failed to get container status \"04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\": rpc error: code = NotFound desc = could not find container \"04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3\": container with ID starting with 04d0698afee22922886c3cb40d00ca76297d4e875c3de3e9ff31a15b9c9d13b3 not found: ID does not exist" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.491347 4662 scope.go:117] "RemoveContainer" containerID="aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6" Dec 08 09:18:37 crc kubenswrapper[4662]: E1208 09:18:37.491651 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\": container with ID starting with aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6 not found: ID does not exist" containerID="aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6" Dec 08 09:18:37 crc kubenswrapper[4662]: I1208 09:18:37.491692 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6"} err="failed to get container status \"aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\": rpc error: code = NotFound desc = could not find container \"aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6\": container 
with ID starting with aa39c116eed43b5f50abab0d3bdd6264d4ab937376460e985d0dee59a6917ab6 not found: ID does not exist" Dec 08 09:18:38 crc kubenswrapper[4662]: I1208 09:18:38.341214 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-smctn" event={"ID":"09a4e6a7-2384-4ead-a4c7-396ff35e0bee","Type":"ContainerStarted","Data":"0c972735abb1df1852d9510102e5dc9367d712a0cc2e4b47b7aed09d96466f37"} Dec 08 09:18:38 crc kubenswrapper[4662]: I1208 09:18:38.341820 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:38 crc kubenswrapper[4662]: I1208 09:18:38.341994 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:38 crc kubenswrapper[4662]: I1208 09:18:38.342135 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:38 crc kubenswrapper[4662]: I1208 09:18:38.342279 4662 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:38 crc kubenswrapper[4662]: I1208 09:18:38.342413 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:38 crc kubenswrapper[4662]: I1208 09:18:38.342558 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:38 crc kubenswrapper[4662]: I1208 09:18:38.703620 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Dec 08 09:18:40 crc kubenswrapper[4662]: I1208 09:18:40.667432 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:40 crc kubenswrapper[4662]: I1208 09:18:40.667945 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:40 crc kubenswrapper[4662]: I1208 09:18:40.713180 4662 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:40 crc kubenswrapper[4662]: I1208 09:18:40.713788 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:40 crc kubenswrapper[4662]: I1208 09:18:40.714129 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:40 crc kubenswrapper[4662]: I1208 09:18:40.714487 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:40 crc kubenswrapper[4662]: I1208 09:18:40.714805 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:40 crc kubenswrapper[4662]: I1208 09:18:40.715083 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:41 crc kubenswrapper[4662]: I1208 09:18:41.397631 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sc8bf" Dec 08 09:18:41 crc kubenswrapper[4662]: I1208 09:18:41.398336 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:41 crc kubenswrapper[4662]: I1208 09:18:41.398886 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:41 crc kubenswrapper[4662]: I1208 09:18:41.399323 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:41 crc kubenswrapper[4662]: I1208 09:18:41.399616 4662 
status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:41 crc kubenswrapper[4662]: I1208 09:18:41.399964 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.068240 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.068287 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.110640 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.111106 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.111346 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.111555 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.111792 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.112002 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.397727 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8lxnn" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.398228 4662 
status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.398508 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.398725 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.398977 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.399218 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: E1208 09:18:42.445072 4662 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: E1208 09:18:42.445458 4662 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: E1208 09:18:42.445715 4662 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: E1208 09:18:42.445942 4662 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: E1208 09:18:42.446200 4662 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:42 crc kubenswrapper[4662]: I1208 09:18:42.446237 4662 controller.go:115] "failed to update lease using latest lease, fallback to ensure 
lease" err="failed 5 attempts to update lease" Dec 08 09:18:42 crc kubenswrapper[4662]: E1208 09:18:42.446470 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="200ms" Dec 08 09:18:42 crc kubenswrapper[4662]: E1208 09:18:42.647101 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="400ms" Dec 08 09:18:43 crc kubenswrapper[4662]: E1208 09:18:43.049057 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="800ms" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.082953 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.083049 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.127670 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.128045 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.128236 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.128545 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.129099 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.129307 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 
38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: E1208 09:18:43.403197 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:18:43Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:18:43Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:18:43Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T09:18:43Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: E1208 09:18:43.403494 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: E1208 09:18:43.403771 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: E1208 09:18:43.403915 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: E1208 09:18:43.404049 4662 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: E1208 09:18:43.404066 4662 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.405266 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-smctn" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.405527 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.405681 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.405937 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.406305 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: I1208 09:18:43.406500 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:43 crc kubenswrapper[4662]: E1208 09:18:43.850021 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="1.6s" Dec 08 09:18:45 crc kubenswrapper[4662]: E1208 09:18:45.451158 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="3.2s" Dec 08 09:18:45 crc kubenswrapper[4662]: E1208 09:18:45.790892 4662 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" volumeName="registry-storage" Dec 08 09:18:46 crc kubenswrapper[4662]: E1208 09:18:46.622496 4662 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.190:6443: connect: connection refused" event="&Event{ObjectMeta:{community-operators-smctn.187f32df7462cf48 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-smctn,UID:09a4e6a7-2384-4ead-a4c7-396ff35e0bee,APIVersion:v1,ResourceVersion:29445,FieldPath:spec.initContainers{extract-content},},Reason:Pulling,Message:Pulling image \"registry.redhat.io/redhat/community-operator-index:v4.18\",Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 09:18:34.264252232 +0000 UTC m=+237.833280222,LastTimestamp:2025-12-08 09:18:34.264252232 +0000 UTC m=+237.833280222,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 09:18:46 crc kubenswrapper[4662]: I1208 09:18:46.706489 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:46 crc kubenswrapper[4662]: I1208 09:18:46.707035 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:46 crc kubenswrapper[4662]: I1208 09:18:46.707469 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:46 crc kubenswrapper[4662]: I1208 09:18:46.707934 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:46 crc kubenswrapper[4662]: I1208 09:18:46.708672 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:48 crc kubenswrapper[4662]: E1208 09:18:48.652535 4662 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.190:6443: connect: connection refused" interval="6.4s"
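
[Editor's note] The controller.go:145 entries show the node-lease controller backing off: the retry interval doubles from 1.6s to 3.2s to 6.4s while the Lease in kube-node-lease cannot be fetched. A sketch of that backoff shape; ensureLease is a hypothetical stand-in for the real lease sync call, and the 1.6s starting interval is read off the log:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// retryLease sketches the doubling retry visible above
// ("1.6s" -> "3.2s" -> "6.4s"). ensureLease stands in for the get-or-create
// of the kube-node-lease/crc Lease object; it is not the real API.
func retryLease(ctx context.Context, ensureLease func(context.Context) error) {
	interval := 1600 * time.Millisecond // first retry interval seen in the log
	for {
		err := ensureLease(ctx)
		if err == nil {
			return
		}
		fmt.Printf("Failed to ensure lease exists, will retry err=%v interval=%q\n", err, interval)
		select {
		case <-ctx.Done():
			return
		case <-time.After(interval):
		}
		interval *= 2 // exponential backoff, as the logged intervals show
	}
}
```

Dec 08 09:18:48 crc kubenswrapper[4662]: I1208 09:18:48.697282 4662 util.go:30] "No sandbox for pod can be found.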
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:48 crc kubenswrapper[4662]: I1208 09:18:48.698186 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:48 crc kubenswrapper[4662]: I1208 09:18:48.698687 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:48 crc kubenswrapper[4662]: I1208 09:18:48.699080 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:48 crc kubenswrapper[4662]: I1208 09:18:48.699374 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:48 crc kubenswrapper[4662]: I1208 09:18:48.699706 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:48 crc kubenswrapper[4662]: I1208 09:18:48.723649 4662 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6" Dec 08 09:18:48 crc kubenswrapper[4662]: I1208 09:18:48.723699 4662 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6" Dec 08 09:18:48 crc kubenswrapper[4662]: E1208 09:18:48.724380 4662 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
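
[Editor's note] kubelet.go:1909 and mirror_client.go:130/138 above are the static-pod path: kube-apiserver-crc runs from a manifest on disk, and the kubelet keeps a "mirror" pod for it in the API. The mirror with the old UID ea3eddb9-... is stale, so the kubelet tries to delete it, and the delete keeps failing while the API server it targets is itself still down. A client-go sketch of such a delete, using a UID precondition so only the outdated mirror instance can be removed; assumes a configured clientset:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// deleteMirrorPod deletes the stale mirror pod by name, with a UID
// precondition so a freshly recreated mirror is never removed by accident.
func deleteMirrorPod(ctx context.Context, client kubernetes.Interface, ns, name string, uid types.UID) error {
	opts := metav1.DeleteOptions{Preconditions: &metav1.Preconditions{UID: &uid}}
	if err := client.CoreV1().Pods(ns).Delete(ctx, name, opts); err != nil {
		// With the apiserver down this is the "connection refused" above.
		return fmt.Errorf("failed deleting a mirror pod: %w", err)
	}
	return nil
}
```

Dec 08 09:18:48 crc kubenswrapper[4662]: I1208 09:18:48.725099 4662 util.go:30] "No sandbox for pod can be found.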
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:48 crc kubenswrapper[4662]: W1208 09:18:48.741142 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-40c849fa911175d4f091a8423c74e0a0046db307141c2829651cb3d2bbafc026 WatchSource:0}: Error finding container 40c849fa911175d4f091a8423c74e0a0046db307141c2829651cb3d2bbafc026: Status 404 returned error can't find the container with id 40c849fa911175d4f091a8423c74e0a0046db307141c2829651cb3d2bbafc026 Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.395178 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.395709 4662 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35" exitCode=1 Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.396240 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35"} Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.397091 4662 scope.go:117] "RemoveContainer" containerID="0a97f4fd130015334b1fab8e91c2ed2e477d96c552c741937d17371ad8c6ce35" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.397796 4662 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="a2df0b8939fb02f608fb9a426609b05f79b7da437d64d920bdde6525049127ac" exitCode=0 Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.397822 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"a2df0b8939fb02f608fb9a426609b05f79b7da437d64d920bdde6525049127ac"} Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.397847 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"40c849fa911175d4f091a8423c74e0a0046db307141c2829651cb3d2bbafc026"} Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.398070 4662 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.398085 4662 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6" Dec 08 09:18:49 crc kubenswrapper[4662]: E1208 09:18:49.398309 4662 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
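
[Editor's note] The generic.go:334 / kubelet.go:2453 pairs are PLEG (Pod Lifecycle Event Generator) output: the kubelet relists runtime containers, turns state transitions into ContainerDied/ContainerStarted events, and the sync loop reacts; here kube-controller-manager exited with code 1, so its container is removed and restarted. A pared-down sketch of that event shape and dispatch; the types are illustrative, not the kubelet's:

```go
package main

import "fmt"

// PodLifecycleEvent is a cut-down version of the events the PLEG lines
// above carry into the sync loop ("ContainerDied", "ContainerStarted").
type PodLifecycleEvent struct {
	PodID       string
	Type        string // "ContainerDied" | "ContainerStarted"
	ContainerID string
	ExitCode    int // meaningful for ContainerDied; exitCode=1 above triggered a restart
}

// handlePLEGEvent sketches the dispatch: a died container is logged,
// removed, and the pod is queued for another sync pass.
func handlePLEGEvent(ev PodLifecycleEvent) {
	switch ev.Type {
	case "ContainerDied":
		fmt.Printf("SyncLoop (PLEG): pod %s container %s died exitCode=%d\n",
			ev.PodID, ev.ContainerID, ev.ExitCode)
		fmt.Printf("RemoveContainer %s, resync pod\n", ev.ContainerID)
	case "ContainerStarted":
		fmt.Printf("SyncLoop (PLEG): pod %s container %s started\n", ev.PodID, ev.ContainerID)
	}
}
```

Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.398340 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get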
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.398675 4662 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.399159 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.399474 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.399708 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.400016 4662 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.400303 4662 status_manager.go:851] "Failed to get status for pod" podUID="09a4e6a7-2384-4ead-a4c7-396ff35e0bee" pod="openshift-marketplace/community-operators-smctn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-smctn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.400450 4662 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.400780 4662 status_manager.go:851] "Failed to get status for pod" podUID="ae8e99c2-28da-435a-b3bf-f3b7e71f783c" pod="openshift-marketplace/certified-operators-8lxnn" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-8lxnn\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.401042 4662 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.401262 4662 status_manager.go:851] "Failed to get status for pod" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:49 crc kubenswrapper[4662]: I1208 09:18:49.401502 4662 status_manager.go:851] "Failed to get status for pod" podUID="7473f855-7fa1-44f3-8841-6041a045c35a" pod="openshift-marketplace/redhat-marketplace-sc8bf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-sc8bf\": dial tcp 38.102.83.190:6443: connect: connection refused" Dec 08 09:18:50 crc kubenswrapper[4662]: I1208 09:18:50.410953 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Dec 08 09:18:50 crc kubenswrapper[4662]: I1208 09:18:50.411288 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"4dc9b925c3aa3dc20b7afa620ace8942ae7a2d146c0c2b0bd9462d0d1783af9a"} Dec 08 09:18:50 crc kubenswrapper[4662]: I1208 09:18:50.414409 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"38c8d61e641cc76eee3b495d33890d3c7cfa32c182065c75550d67968081050d"} Dec 08 09:18:50 crc kubenswrapper[4662]: I1208 09:18:50.414449 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"b778bde7ef145b08d9eb2f72507ef0fdb5f61ee2f86a6458bed46d360dfa5c34"} Dec 08 09:18:50 crc kubenswrapper[4662]: I1208 09:18:50.414477 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"632eb7d978fa8167862db05fb9e3675571912b6e5c87a1ab33897796ee1e9d45"} Dec 08 09:18:50 crc kubenswrapper[4662]: I1208 09:18:50.561330 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:18:51 crc kubenswrapper[4662]: I1208 09:18:51.423394 4662 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6" Dec 08 09:18:51 crc kubenswrapper[4662]: I1208 09:18:51.423678 4662 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6" Dec 08 09:18:51 crc kubenswrapper[4662]: I1208 09:18:51.423594 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1dbf2791dd63473c2fd7ddb878cfcaf6f9e707e6de3e564a38adea7ee55a4800"} Dec 08 
09:18:51 crc kubenswrapper[4662]: I1208 09:18:51.423790 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:51 crc kubenswrapper[4662]: I1208 09:18:51.423804 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9bba8808306dce8d0f4ce601e4379f0191ae7d40cf4641b424033b9c309a7673"} Dec 08 09:18:52 crc kubenswrapper[4662]: I1208 09:18:52.660233 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:18:52 crc kubenswrapper[4662]: I1208 09:18:52.661220 4662 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Dec 08 09:18:52 crc kubenswrapper[4662]: I1208 09:18:52.661267 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.227512 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" podUID="818d198d-782d-4af8-b2f0-0d752ecb5621" containerName="oauth-openshift" containerID="cri-o://071e7d9b2be64e9538c4096f781758f6e53ecfb493a718badb5ec5c07a9892ab" gracePeriod=15 Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.441798 4662 generic.go:334] "Generic (PLEG): container finished" podID="818d198d-782d-4af8-b2f0-0d752ecb5621" containerID="071e7d9b2be64e9538c4096f781758f6e53ecfb493a718badb5ec5c07a9892ab" exitCode=0 Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.441867 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" event={"ID":"818d198d-782d-4af8-b2f0-0d752ecb5621","Type":"ContainerDied","Data":"071e7d9b2be64e9538c4096f781758f6e53ecfb493a718badb5ec5c07a9892ab"} Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.726208 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.726262 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.737521 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
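
[Editor's note] patch_prober.go:28 and prober.go:107 record a failing HTTPS startup probe: a GET against https://192.168.126.11:10257/healthz that is refused until kube-controller-manager binds its port (the probe flips to status="started" at 09:19:02 below). A sketch of such a probe; for HTTP probes any dial error or status outside 200-399 counts as failure, and InsecureSkipVerify is an assumption standing in for the kubelet's probe TLS handling:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// probeHealthz performs one HTTPS GET the way the startup probe above does,
// e.g. probeHealthz("https://192.168.126.11:10257/healthz").
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			// Assumed for the sketch; probes typically skip cert verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		// e.g. "dial tcp 192.168.126.11:10257: connect: connection refused"
		return fmt.Errorf("probe failed: %w", err)
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil // probe success: container counts as "started"
}
```

Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.807934 4662 util.go:48] "No ready sandbox for pod can be found.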
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.951592 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-cliconfig\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.951660 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-trusted-ca-bundle\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.951687 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-serving-cert\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.951765 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-error\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.951790 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-policies\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.951812 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-ocp-branding-template\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.951887 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-login\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.951932 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-provider-selection\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.951964 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vvp28\" (UniqueName: \"kubernetes.io/projected/818d198d-782d-4af8-b2f0-0d752ecb5621-kube-api-access-vvp28\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: 
\"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.952011 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-session\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.952042 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-router-certs\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.952186 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-service-ca\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.952220 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-idp-0-file-data\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.952263 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-dir\") pod \"818d198d-782d-4af8-b2f0-0d752ecb5621\" (UID: \"818d198d-782d-4af8-b2f0-0d752ecb5621\") " Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.953310 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.954268 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.954683 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.955055 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.955623 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.955779 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.955807 4662 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.955826 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.955846 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.955864 4662 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/818d198d-782d-4af8-b2f0-0d752ecb5621-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.960786 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818d198d-782d-4af8-b2f0-0d752ecb5621-kube-api-access-vvp28" (OuterVolumeSpecName: "kube-api-access-vvp28") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "kube-api-access-vvp28". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.961474 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.961816 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.962562 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.963055 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.963233 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.963381 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.963595 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:18:53 crc kubenswrapper[4662]: I1208 09:18:53.964348 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "818d198d-782d-4af8-b2f0-0d752ecb5621" (UID: "818d198d-782d-4af8-b2f0-0d752ecb5621"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.056938 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.056968 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.057001 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vvp28\" (UniqueName: \"kubernetes.io/projected/818d198d-782d-4af8-b2f0-0d752ecb5621-kube-api-access-vvp28\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.057011 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.057019 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.057028 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.057039 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.057047 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.057055 4662 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/818d198d-782d-4af8-b2f0-0d752ecb5621-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.450320 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" event={"ID":"818d198d-782d-4af8-b2f0-0d752ecb5621","Type":"ContainerDied","Data":"bcb72e40c14968b99dea4a213370c15091975e8f83fe875204651ddc5e1ae014"} Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.450379 4662 scope.go:117] "RemoveContainer" containerID="071e7d9b2be64e9538c4096f781758f6e53ecfb493a718badb5ec5c07a9892ab" Dec 08 09:18:54 crc kubenswrapper[4662]: I1208 09:18:54.450383 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-5xdtn" Dec 08 09:18:56 crc kubenswrapper[4662]: I1208 09:18:56.437143 4662 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:56 crc kubenswrapper[4662]: E1208 09:18:56.454757 4662 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\": Failed to watch *v1.Secret: unknown (get secrets)" logger="UnhandledError" Dec 08 09:18:56 crc kubenswrapper[4662]: I1208 09:18:56.460993 4662 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6" Dec 08 09:18:56 crc kubenswrapper[4662]: I1208 09:18:56.461110 4662 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6" Dec 08 09:18:56 crc kubenswrapper[4662]: I1208 09:18:56.466292 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 09:18:56 crc kubenswrapper[4662]: E1208 09:18:56.685033 4662 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: unknown (get configmaps)" logger="UnhandledError" Dec 08 09:18:56 crc kubenswrapper[4662]: I1208 09:18:56.718305 4662 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="51f17fa6-3b6e-402d-bcac-ca25f92076c9" Dec 08 09:18:57 crc kubenswrapper[4662]: I1208 09:18:57.466913 4662 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6" Dec 08 09:18:57 crc kubenswrapper[4662]: I1208 09:18:57.466947 4662 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6" Dec 08 09:18:57 crc kubenswrapper[4662]: I1208 09:18:57.470107 4662 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="51f17fa6-3b6e-402d-bcac-ca25f92076c9" Dec 08 09:19:02 crc kubenswrapper[4662]: I1208 09:19:02.664311 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:19:02 crc kubenswrapper[4662]: I1208 09:19:02.668011 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 09:19:06 crc kubenswrapper[4662]: I1208 09:19:06.139865 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Dec 08 09:19:06 crc kubenswrapper[4662]: I1208 09:19:06.479590 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Dec 08 09:19:06 crc kubenswrapper[4662]: I1208 09:19:06.838476 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Dec 08 09:19:06 crc kubenswrapper[4662]: I1208 09:19:06.863337 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
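
[Editor's note] From here the log is dominated by reflector.go:368 "Caches populated" entries: every Secret and ConfigMap referenced by a pod gets its own watch, and once the API server answers again each reflector's initial list succeeds and its cache fills. A client-go sketch of the same informer pattern; the namespace scoping and 30s resync are assumptions of this sketch:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchSecretsAndConfigMaps starts Secret and ConfigMap informers and blocks
// until their caches fill -- the moment the kubelet would log
// "Caches populated for *v1.Secret" / "*v1.ConfigMap".
func watchSecretsAndConfigMaps(ctx context.Context, client kubernetes.Interface, namespace string) error {
	factory := informers.NewSharedInformerFactoryWithOptions(client, 30*time.Second,
		informers.WithNamespace(namespace))
	secrets := factory.Core().V1().Secrets().Informer()
	configMaps := factory.Core().V1().ConfigMaps().Informer()

	factory.Start(ctx.Done())
	// WaitForCacheSync returns once the initial list of each watch succeeds.
	if !cache.WaitForCacheSync(ctx.Done(), secrets.HasSynced, configMaps.HasSynced) {
		return fmt.Errorf("caches never populated")
	}
	fmt.Println("caches populated")
	return nil
}
```

Dec 08 09:19:06 crc kubenswrapper[4662]: I1208 09:19:06.866507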
4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Dec 08 09:19:07 crc kubenswrapper[4662]: I1208 09:19:07.030347 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Dec 08 09:19:07 crc kubenswrapper[4662]: I1208 09:19:07.077931 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Dec 08 09:19:07 crc kubenswrapper[4662]: I1208 09:19:07.233623 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Dec 08 09:19:07 crc kubenswrapper[4662]: I1208 09:19:07.395960 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Dec 08 09:19:07 crc kubenswrapper[4662]: I1208 09:19:07.544456 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Dec 08 09:19:07 crc kubenswrapper[4662]: I1208 09:19:07.736427 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Dec 08 09:19:08 crc kubenswrapper[4662]: I1208 09:19:08.318210 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Dec 08 09:19:08 crc kubenswrapper[4662]: I1208 09:19:08.381226 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Dec 08 09:19:08 crc kubenswrapper[4662]: I1208 09:19:08.438868 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Dec 08 09:19:08 crc kubenswrapper[4662]: I1208 09:19:08.871422 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Dec 08 09:19:08 crc kubenswrapper[4662]: I1208 09:19:08.949158 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Dec 08 09:19:08 crc kubenswrapper[4662]: I1208 09:19:08.992962 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.032165 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.037116 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.046517 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.052129 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.181160 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.183958 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.370414 4662 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"openshift-service-ca.crt" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.475554 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.543718 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.624537 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.641230 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.741644 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.789109 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.854433 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.871190 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.882616 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Dec 08 09:19:09 crc kubenswrapper[4662]: I1208 09:19:09.888549 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.047179 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.067369 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.073839 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.275076 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.411806 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.639043 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.728589 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.810714 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.858856 4662 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"iptables-alerter-script" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.894868 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.904369 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Dec 08 09:19:10 crc kubenswrapper[4662]: I1208 09:19:10.967218 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.011703 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.014414 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.043689 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.065694 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.133241 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.148431 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.251886 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.274290 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.280572 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.289643 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.345285 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.505253 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.652584 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.669263 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.733365 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.743108 4662 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.743809 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.780582 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.832969 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.871652 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.926020 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.940007 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Dec 08 09:19:11 crc kubenswrapper[4662]: I1208 09:19:11.989100 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.010135 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.036798 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.088011 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.123334 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.221768 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.246935 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.275881 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.333365 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.576526 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.671108 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.700491 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.738216 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.890167 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.912548 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Dec 08 09:19:12 crc kubenswrapper[4662]: I1208 09:19:12.916386 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.010760 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.030370 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.061239 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.134705 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.149802 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.173160 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.181201 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.222086 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.265035 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.287663 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.376725 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.439237 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.451451 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.534237 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.542829 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.550489 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.637592 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.685200 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.714693 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.743237 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.821338 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.827274 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.839484 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.850012 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.920876 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.936074 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Dec 08 09:19:13 crc kubenswrapper[4662]: I1208 09:19:13.996161 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.013659 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.280521 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.331269 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.490602 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.504907 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.534111 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.542186 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.574610 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.657854 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.678438 4662 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.750547 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.782211 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.788559 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.839252 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.863354 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Dec 08 09:19:14 crc kubenswrapper[4662]: I1208 09:19:14.994594 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Dec 08 09:19:15 crc kubenswrapper[4662]: I1208 09:19:15.019005 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Dec 08 09:19:15 crc kubenswrapper[4662]: I1208 09:19:15.149019 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Dec 08 09:19:15 crc kubenswrapper[4662]: I1208 09:19:15.205053 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Dec 08 09:19:15 crc kubenswrapper[4662]: I1208 09:19:15.358235 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Dec 08 09:19:15 crc kubenswrapper[4662]: I1208 09:19:15.602311 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r"
Dec 08 09:19:15 crc kubenswrapper[4662]: I1208 09:19:15.630803 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Dec 08 09:19:15 crc kubenswrapper[4662]: I1208 09:19:15.727027 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Dec 08 09:19:15 crc kubenswrapper[4662]: I1208 09:19:15.733049 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Dec 08 09:19:15 crc kubenswrapper[4662]: I1208 09:19:15.833294 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Dec 08 09:19:15 crc kubenswrapper[4662]: I1208 09:19:15.934218 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.086422 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.222834 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.266862 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.266991 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.340462 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.402026 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.423732 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.528052 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.531480 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.573498 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.709442 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.870413 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.875620 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.900959 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.944217 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.970412 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Dec 08 09:19:16 crc kubenswrapper[4662]: I1208 09:19:16.988142 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.102802 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.186073 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.322025 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.330833 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.339628 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.494996 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.521774 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.597266 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.670705 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.678459 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.732045 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.748881 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.758497 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.808195 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.808266 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.866735 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.897849 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.913280 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.970196 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Dec 08 09:19:17 crc kubenswrapper[4662]: I1208 09:19:17.975457 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.053359 4662 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.064875 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.179229 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.195967 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.291523 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.349145 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.391133 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.507697 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.646193 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.699969 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.734863 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.875410 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.881684 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.933082 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.987732 4662 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.988146 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-smctn" podStartSLOduration=44.150009972 podStartE2EDuration="46.988107284s" podCreationTimestamp="2025-12-08 09:18:32 +0000 UTC" firstStartedPulling="2025-12-08 09:18:34.264248562 +0000 UTC m=+237.833276552" lastFinishedPulling="2025-12-08 09:18:37.102345874 +0000 UTC m=+240.671373864" observedRunningTime="2025-12-08 09:18:56.295581896 +0000 UTC m=+259.864609886" watchObservedRunningTime="2025-12-08 09:19:18.988107284 +0000 UTC m=+282.557135304"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.989555 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sc8bf" podStartSLOduration=46.429138188 podStartE2EDuration="48.989544153s" podCreationTimestamp="2025-12-08 09:18:30 +0000 UTC" firstStartedPulling="2025-12-08 09:18:31.199590291 +0000 UTC m=+234.768618281" lastFinishedPulling="2025-12-08 09:18:33.759996256 +0000 UTC m=+237.329024246" observedRunningTime="2025-12-08 09:18:56.265508024 +0000 UTC m=+259.834536014" watchObservedRunningTime="2025-12-08 09:19:18.989544153 +0000 UTC m=+282.558572153"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.989961 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=44.989953454 podStartE2EDuration="44.989953454s" podCreationTimestamp="2025-12-08 09:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:18:56.235098282 +0000 UTC m=+259.804126272" watchObservedRunningTime="2025-12-08 09:19:18.989953454 +0000 UTC m=+282.558981464"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.992680 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8lxnn" podStartSLOduration=45.475156856 podStartE2EDuration="47.992669478s" podCreationTimestamp="2025-12-08 09:18:31 +0000 UTC" firstStartedPulling="2025-12-08 09:18:33.255386272 +0000 UTC m=+236.824414262" lastFinishedPulling="2025-12-08 09:18:35.772898894 +0000 UTC m=+239.341926884" observedRunningTime="2025-12-08 09:18:56.223958307 +0000 UTC m=+259.792986337" watchObservedRunningTime="2025-12-08 09:19:18.992669478 +0000 UTC m=+282.561697478"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.993686 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-5xdtn"]
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.993876 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-6f96647944-qv7hl"]
Dec 08 09:19:18 crc kubenswrapper[4662]: E1208 09:19:18.994165 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" containerName="installer"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.994266 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" containerName="installer"
Dec 08 09:19:18 crc kubenswrapper[4662]: E1208 09:19:18.994380 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818d198d-782d-4af8-b2f0-0d752ecb5621" containerName="oauth-openshift"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.994475 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="818d198d-782d-4af8-b2f0-0d752ecb5621" containerName="oauth-openshift"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.994506 4662 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.994699 4662 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="ea3eddb9-01cf-49f0-81cc-c9534bfe50b6"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.994861 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="818d198d-782d-4af8-b2f0-0d752ecb5621" containerName="oauth-openshift"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.994964 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="73a8fe6b-f642-4ddc-a7ff-a13f1c87bea1" containerName="installer"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.995482 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl"
Dec 08 09:19:18 crc kubenswrapper[4662]: I1208 09:19:18.999426 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.000719 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.000824 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.002612 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.003429 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.003452 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.007373 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.010516 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.014508 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.020234 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.020818 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.020882 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.010659 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.041017 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.041770 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.044621 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.082588 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=23.082555386 podStartE2EDuration="23.082555386s" podCreationTimestamp="2025-12-08 09:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:19:19.081179378 +0000 UTC m=+282.650207388" watchObservedRunningTime="2025-12-08 09:19:19.082555386 +0000 UTC m=+282.651583376"
podStartE2EDuration="23.082555386s" podCreationTimestamp="2025-12-08 09:18:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:19:19.081179378 +0000 UTC m=+282.650207388" watchObservedRunningTime="2025-12-08 09:19:19.082555386 +0000 UTC m=+282.651583376" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.095850 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.095916 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.095965 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096086 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc2wp\" (UniqueName: \"kubernetes.io/projected/982fbe66-a7b8-45d1-a007-e722ae04cc99-kube-api-access-wc2wp\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096166 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096246 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096283 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " 
pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096325 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096394 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/982fbe66-a7b8-45d1-a007-e722ae04cc99-audit-dir\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096429 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096473 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096546 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096582 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.096625 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-audit-policies\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198103 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-template-error\") pod 
\"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198171 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198194 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc2wp\" (UniqueName: \"kubernetes.io/projected/982fbe66-a7b8-45d1-a007-e722ae04cc99-kube-api-access-wc2wp\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198219 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198244 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198269 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198295 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198326 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/982fbe66-a7b8-45d1-a007-e722ae04cc99-audit-dir\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198349 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198379 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198414 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198437 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198462 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-audit-policies\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.198488 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.199662 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-service-ca\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.200526 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-audit-policies\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.200586 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/982fbe66-a7b8-45d1-a007-e722ae04cc99-audit-dir\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " 
pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.202395 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.203052 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-cliconfig\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.205914 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-session\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.206632 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-template-error\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.206849 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-template-login\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.207083 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.207548 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-router-certs\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.210202 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " 
pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.216427 4662 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.221289 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-system-serving-cert\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.222116 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc2wp\" (UniqueName: \"kubernetes.io/projected/982fbe66-a7b8-45d1-a007-e722ae04cc99-kube-api-access-wc2wp\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.222384 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/982fbe66-a7b8-45d1-a007-e722ae04cc99-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-6f96647944-qv7hl\" (UID: \"982fbe66-a7b8-45d1-a007-e722ae04cc99\") " pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.239235 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.277579 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.356421 4662 util.go:30] "No sandbox for pod can be found. 
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.455031 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.462066 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.511288 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.552109 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.556714 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.569530 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-6f96647944-qv7hl"]
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.598090 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" event={"ID":"982fbe66-a7b8-45d1-a007-e722ae04cc99","Type":"ContainerStarted","Data":"2a5b91f3e9c036bc93fda5275a44ee29b666d38354232b1fa62740ef9c5e4569"}
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.660381 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.683617 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.703980 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.708398 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.723542 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.732728 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.840324 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.851588 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.855073 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.938182 4662 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.992209 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Dec 08 09:19:19 crc kubenswrapper[4662]: I1208 09:19:19.993907 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.009074 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.037949 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.092521 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.137085 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.604256 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" event={"ID":"982fbe66-a7b8-45d1-a007-e722ae04cc99","Type":"ContainerStarted","Data":"3f1fb53496c6e08e7f37031d22e2fe25944808a8de4541661fb5c944b006940b"}
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.604605 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.610167 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.622112 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-6f96647944-qv7hl" podStartSLOduration=52.622095576 podStartE2EDuration="52.622095576s" podCreationTimestamp="2025-12-08 09:18:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:19:20.621916881 +0000 UTC m=+284.190944871" watchObservedRunningTime="2025-12-08 09:19:20.622095576 +0000 UTC m=+284.191123566"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.634118 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.676032 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.709548 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="818d198d-782d-4af8-b2f0-0d752ecb5621" path="/var/lib/kubelet/pods/818d198d-782d-4af8-b2f0-0d752ecb5621/volumes"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.901635 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Dec 08 09:19:20 crc kubenswrapper[4662]: I1208 09:19:20.980634 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.013989 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.322341 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.326902 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.344592 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.391512 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.465371 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.521836 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.539122 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.792242 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.876969 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Dec 08 09:19:21 crc kubenswrapper[4662]: I1208 09:19:21.964096 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Dec 08 09:19:22 crc kubenswrapper[4662]: I1208 09:19:22.186906 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Dec 08 09:19:22 crc kubenswrapper[4662]: I1208 09:19:22.227231 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Dec 08 09:19:22 crc kubenswrapper[4662]: I1208 09:19:22.310660 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Dec 08 09:19:22 crc kubenswrapper[4662]: I1208 09:19:22.351236 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Dec 08 09:19:22 crc kubenswrapper[4662]: I1208 09:19:22.412717 4662 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Dec 08 09:19:22 crc kubenswrapper[4662]: I1208 09:19:22.667380 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Dec 08 09:19:30 crc kubenswrapper[4662]: I1208 09:19:30.219479 4662 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Dec 08 09:19:30 crc kubenswrapper[4662]: I1208 09:19:30.220007 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ac2f4529fbb7c4d8666f20901a05db06bd04eeb3030db8b15b0e4300f001cac1" gracePeriod=5
Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.704991 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.705330 4662 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ac2f4529fbb7c4d8666f20901a05db06bd04eeb3030db8b15b0e4300f001cac1" exitCode=137
Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.833867 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.833963 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.900064 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.900212 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.900248 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.900238 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
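[In the records above, exitCode=137 follows the usual Unix convention of 128 plus the terminating signal number, i.e. signal 9 (SIGKILL): the startup-monitor container did not exit on its own within the gracePeriod=5 seconds requested at 09:19:30, so the runtime force-killed it (09:19:30.22 to the finish observed at 09:19:35.70 is roughly that grace period plus cleanup). A small illustrative Go sketch of the decoding; the helper name is ours, not kubelet code:

    package main

    import (
    	"fmt"
    	"syscall"
    )

    // decodeExitCode interprets a container exit status using the common
    // 128+signal convention. Illustrative only; not taken from kubelet.
    func decodeExitCode(code int) string {
    	if code > 128 {
    		sig := syscall.Signal(code - 128)
    		return fmt.Sprintf("terminated by signal %d (%v)", code-128, sig)
    	}
    	return fmt.Sprintf("exited with status %d", code)
    }

    func main() {
    	fmt.Println(decodeExitCode(137)) // terminated by signal 9 (killed)
    }
]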
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.900287 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.900394 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.900709 4662 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.900819 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.900882 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.900942 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:19:35 crc kubenswrapper[4662]: I1208 09:19:35.910148 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.001472 4662 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.001732 4662 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.001849 4662 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.001924 4662 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.567463 4662 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.706292 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.706668 4662 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.717958 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.718121 4662 scope.go:117] "RemoveContainer" containerID="ac2f4529fbb7c4d8666f20901a05db06bd04eeb3030db8b15b0e4300f001cac1" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.718119 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.719848 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.719899 4662 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e320d6af-91c1-4c14-8278-14c30715d718" Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.725036 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 09:19:36 crc kubenswrapper[4662]: I1208 09:19:36.725097 4662 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e320d6af-91c1-4c14-8278-14c30715d718" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.223103 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hf98q"] Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.223614 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" podUID="2560bbb9-084c-4976-9656-373cfd6aeb69" containerName="controller-manager" containerID="cri-o://d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581" gracePeriod=30 Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.337657 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv"] Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.338095 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" podUID="9480540c-cb7a-4822-9b8d-aeb553b74ab4" containerName="route-controller-manager" containerID="cri-o://7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e" gracePeriod=30 Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.546724 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.644135 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2560bbb9-084c-4976-9656-373cfd6aeb69-serving-cert\") pod \"2560bbb9-084c-4976-9656-373cfd6aeb69\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.644201 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-config\") pod \"2560bbb9-084c-4976-9656-373cfd6aeb69\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.644277 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-client-ca\") pod \"2560bbb9-084c-4976-9656-373cfd6aeb69\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.644308 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lklc\" (UniqueName: \"kubernetes.io/projected/2560bbb9-084c-4976-9656-373cfd6aeb69-kube-api-access-9lklc\") pod \"2560bbb9-084c-4976-9656-373cfd6aeb69\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.644355 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-proxy-ca-bundles\") pod \"2560bbb9-084c-4976-9656-373cfd6aeb69\" (UID: \"2560bbb9-084c-4976-9656-373cfd6aeb69\") " Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.645597 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2560bbb9-084c-4976-9656-373cfd6aeb69" (UID: "2560bbb9-084c-4976-9656-373cfd6aeb69"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.646766 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-client-ca" (OuterVolumeSpecName: "client-ca") pod "2560bbb9-084c-4976-9656-373cfd6aeb69" (UID: "2560bbb9-084c-4976-9656-373cfd6aeb69"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.647298 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-config" (OuterVolumeSpecName: "config") pod "2560bbb9-084c-4976-9656-373cfd6aeb69" (UID: "2560bbb9-084c-4976-9656-373cfd6aeb69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.654709 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2560bbb9-084c-4976-9656-373cfd6aeb69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2560bbb9-084c-4976-9656-373cfd6aeb69" (UID: "2560bbb9-084c-4976-9656-373cfd6aeb69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.673468 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-658fd5994d-7tf7p"] Dec 08 09:19:39 crc kubenswrapper[4662]: E1208 09:19:39.673657 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2560bbb9-084c-4976-9656-373cfd6aeb69" containerName="controller-manager" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.673677 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2560bbb9-084c-4976-9656-373cfd6aeb69" containerName="controller-manager" Dec 08 09:19:39 crc kubenswrapper[4662]: E1208 09:19:39.673690 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.673695 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.673795 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.673812 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="2560bbb9-084c-4976-9656-373cfd6aeb69" containerName="controller-manager" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.674113 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.712722 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2560bbb9-084c-4976-9656-373cfd6aeb69-kube-api-access-9lklc" (OuterVolumeSpecName: "kube-api-access-9lklc") pod "2560bbb9-084c-4976-9656-373cfd6aeb69" (UID: "2560bbb9-084c-4976-9656-373cfd6aeb69"). InnerVolumeSpecName "kube-api-access-9lklc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.712902 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-658fd5994d-7tf7p"] Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.720865 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.738999 4662 generic.go:334] "Generic (PLEG): container finished" podID="2560bbb9-084c-4976-9656-373cfd6aeb69" containerID="d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581" exitCode=0 Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.739099 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.739102 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" event={"ID":"2560bbb9-084c-4976-9656-373cfd6aeb69","Type":"ContainerDied","Data":"d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581"} Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.739129 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-hf98q" event={"ID":"2560bbb9-084c-4976-9656-373cfd6aeb69","Type":"ContainerDied","Data":"373428cb7b4d270b509454e87b807accf67aa511ecb1b1095e4eaea99aed64b4"} Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.739147 4662 scope.go:117] "RemoveContainer" containerID="d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.751663 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s6fm\" (UniqueName: \"kubernetes.io/projected/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-kube-api-access-4s6fm\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.751728 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-config\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.751773 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-client-ca\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.751853 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-proxy-ca-bundles\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.751906 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-serving-cert\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.751949 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2560bbb9-084c-4976-9656-373cfd6aeb69-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.751959 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.751968 4662 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.751976 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lklc\" (UniqueName: \"kubernetes.io/projected/2560bbb9-084c-4976-9656-373cfd6aeb69-kube-api-access-9lklc\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.751987 4662 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2560bbb9-084c-4976-9656-373cfd6aeb69-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.763338 4662 generic.go:334] "Generic (PLEG): container finished" podID="9480540c-cb7a-4822-9b8d-aeb553b74ab4" containerID="7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e" exitCode=0 Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.763373 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" event={"ID":"9480540c-cb7a-4822-9b8d-aeb553b74ab4","Type":"ContainerDied","Data":"7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e"} Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.763413 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" event={"ID":"9480540c-cb7a-4822-9b8d-aeb553b74ab4","Type":"ContainerDied","Data":"c20460ba8661c04825913269da49b463e442fda05048e01526cbb90d20a866d8"} Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.763461 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.784372 4662 scope.go:117] "RemoveContainer" containerID="d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581" Dec 08 09:19:39 crc kubenswrapper[4662]: E1208 09:19:39.788538 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581\": container with ID starting with d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581 not found: ID does not exist" containerID="d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.788614 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581"} err="failed to get container status \"d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581\": rpc error: code = NotFound desc = could not find container \"d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581\": container with ID starting with d7c1dbf931794821ad3235d45682724d3ad2271f9c8484c6bcd38ccc10ba7581 not found: ID does not exist" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.788684 4662 scope.go:117] "RemoveContainer" containerID="7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.798777 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984"] Dec 08 09:19:39 crc kubenswrapper[4662]: E1208 09:19:39.801973 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9480540c-cb7a-4822-9b8d-aeb553b74ab4" containerName="route-controller-manager" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.802016 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="9480540c-cb7a-4822-9b8d-aeb553b74ab4" containerName="route-controller-manager" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.802287 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="9480540c-cb7a-4822-9b8d-aeb553b74ab4" containerName="route-controller-manager" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.802958 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.811330 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hf98q"] Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.819882 4662 scope.go:117] "RemoveContainer" containerID="7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e" Dec 08 09:19:39 crc kubenswrapper[4662]: E1208 09:19:39.820278 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e\": container with ID starting with 7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e not found: ID does not exist" containerID="7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.820327 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e"} err="failed to get container status \"7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e\": rpc error: code = NotFound desc = could not find container \"7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e\": container with ID starting with 7fc1bbca4a9c54f64de546474bc0cc9a603249907b6cc09daeb584167e3b406e not found: ID does not exist" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.825772 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-hf98q"] Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.831012 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984"] Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.852571 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-config\") pod \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.852619 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9480540c-cb7a-4822-9b8d-aeb553b74ab4-serving-cert\") pod \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.852665 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-client-ca\") pod \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.852692 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7vgl\" (UniqueName: \"kubernetes.io/projected/9480540c-cb7a-4822-9b8d-aeb553b74ab4-kube-api-access-v7vgl\") pod \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\" (UID: \"9480540c-cb7a-4822-9b8d-aeb553b74ab4\") " Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.852909 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s6fm\" (UniqueName: 
\"kubernetes.io/projected/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-kube-api-access-4s6fm\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.852953 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-config\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.852985 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-client-ca\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.853012 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-serving-cert\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.853159 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-client-ca\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.853202 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-proxy-ca-bundles\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.853233 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvfkw\" (UniqueName: \"kubernetes.io/projected/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-kube-api-access-pvfkw\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.853265 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-config\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.853294 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-serving-cert\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.854029 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-client-ca\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.854112 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-client-ca" (OuterVolumeSpecName: "client-ca") pod "9480540c-cb7a-4822-9b8d-aeb553b74ab4" (UID: "9480540c-cb7a-4822-9b8d-aeb553b74ab4"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.854239 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-config\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.854622 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-config" (OuterVolumeSpecName: "config") pod "9480540c-cb7a-4822-9b8d-aeb553b74ab4" (UID: "9480540c-cb7a-4822-9b8d-aeb553b74ab4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.855198 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-proxy-ca-bundles\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.857442 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-serving-cert\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.857295 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9480540c-cb7a-4822-9b8d-aeb553b74ab4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9480540c-cb7a-4822-9b8d-aeb553b74ab4" (UID: "9480540c-cb7a-4822-9b8d-aeb553b74ab4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.859057 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9480540c-cb7a-4822-9b8d-aeb553b74ab4-kube-api-access-v7vgl" (OuterVolumeSpecName: "kube-api-access-v7vgl") pod "9480540c-cb7a-4822-9b8d-aeb553b74ab4" (UID: "9480540c-cb7a-4822-9b8d-aeb553b74ab4"). InnerVolumeSpecName "kube-api-access-v7vgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.869998 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s6fm\" (UniqueName: \"kubernetes.io/projected/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-kube-api-access-4s6fm\") pod \"controller-manager-658fd5994d-7tf7p\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.955952 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-serving-cert\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.956297 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-client-ca\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.956344 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvfkw\" (UniqueName: \"kubernetes.io/projected/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-kube-api-access-pvfkw\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.956373 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-config\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.956484 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.956499 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9480540c-cb7a-4822-9b8d-aeb553b74ab4-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.956510 4662 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9480540c-cb7a-4822-9b8d-aeb553b74ab4-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.956523 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v7vgl\" (UniqueName: \"kubernetes.io/projected/9480540c-cb7a-4822-9b8d-aeb553b74ab4-kube-api-access-v7vgl\") on node \"crc\" DevicePath \"\"" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.957817 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-config\") pod 
\"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.959161 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-client-ca\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.959830 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-serving-cert\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:39 crc kubenswrapper[4662]: I1208 09:19:39.974260 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvfkw\" (UniqueName: \"kubernetes.io/projected/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-kube-api-access-pvfkw\") pod \"route-controller-manager-75664bd6d9-9p984\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.041209 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.091698 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv"] Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.094979 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-q7vpv"] Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.126049 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.224602 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-658fd5994d-7tf7p"] Dec 08 09:19:40 crc kubenswrapper[4662]: W1208 09:19:40.237113 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2573fbbb_f03f_4c85_b27f_49cfcd239ec3.slice/crio-e092e64aa14f59c8da43caf92fbea70cd6e4128b6fa447eb1119f0babccd4616 WatchSource:0}: Error finding container e092e64aa14f59c8da43caf92fbea70cd6e4128b6fa447eb1119f0babccd4616: Status 404 returned error can't find the container with id e092e64aa14f59c8da43caf92fbea70cd6e4128b6fa447eb1119f0babccd4616 Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.336045 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984"] Dec 08 09:19:40 crc kubenswrapper[4662]: W1208 09:19:40.345718 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39c3f664_01b9_4f9a_ba3a_52ac30aef9ba.slice/crio-17cc2fa78da78a7d12a9c944393539166e4bd36a5dcc95841ecdffc3e9cba1e6 WatchSource:0}: Error finding container 17cc2fa78da78a7d12a9c944393539166e4bd36a5dcc95841ecdffc3e9cba1e6: Status 404 returned error can't find the container with id 17cc2fa78da78a7d12a9c944393539166e4bd36a5dcc95841ecdffc3e9cba1e6 Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.663297 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hwq58"] Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.664397 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.667060 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.684504 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwq58"] Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.703687 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2560bbb9-084c-4976-9656-373cfd6aeb69" path="/var/lib/kubelet/pods/2560bbb9-084c-4976-9656-373cfd6aeb69/volumes" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.704459 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9480540c-cb7a-4822-9b8d-aeb553b74ab4" path="/var/lib/kubelet/pods/9480540c-cb7a-4822-9b8d-aeb553b74ab4/volumes" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.766474 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zflcz\" (UniqueName: \"kubernetes.io/projected/10201fba-2819-468a-ac74-115ada895ae7-kube-api-access-zflcz\") pod \"redhat-operators-hwq58\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.766535 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-utilities\") pod \"redhat-operators-hwq58\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.766567 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-catalog-content\") pod \"redhat-operators-hwq58\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.771061 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" event={"ID":"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba","Type":"ContainerStarted","Data":"a55294a6190b675127db4d35d87bd931e3e4058a3c47749d2482c929dda8fead"} Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.771102 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" event={"ID":"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba","Type":"ContainerStarted","Data":"17cc2fa78da78a7d12a9c944393539166e4bd36a5dcc95841ecdffc3e9cba1e6"} Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.771255 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.774059 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" event={"ID":"2573fbbb-f03f-4c85-b27f-49cfcd239ec3","Type":"ContainerStarted","Data":"3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce"} Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.774090 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" event={"ID":"2573fbbb-f03f-4c85-b27f-49cfcd239ec3","Type":"ContainerStarted","Data":"e092e64aa14f59c8da43caf92fbea70cd6e4128b6fa447eb1119f0babccd4616"} Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.774184 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.779000 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.790925 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" podStartSLOduration=1.790904542 podStartE2EDuration="1.790904542s" podCreationTimestamp="2025-12-08 09:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:19:40.786652987 +0000 UTC m=+304.355680977" watchObservedRunningTime="2025-12-08 09:19:40.790904542 +0000 UTC m=+304.359932542" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.849250 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" podStartSLOduration=1.8492336950000001 podStartE2EDuration="1.849233695s" podCreationTimestamp="2025-12-08 09:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:19:40.848381111 +0000 UTC m=+304.417409111" watchObservedRunningTime="2025-12-08 09:19:40.849233695 +0000 UTC m=+304.418261685" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.867311 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zflcz\" (UniqueName: \"kubernetes.io/projected/10201fba-2819-468a-ac74-115ada895ae7-kube-api-access-zflcz\") pod \"redhat-operators-hwq58\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.867377 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-utilities\") pod \"redhat-operators-hwq58\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.867426 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-catalog-content\") pod \"redhat-operators-hwq58\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.868263 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-utilities\") pod \"redhat-operators-hwq58\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.868390 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-catalog-content\") pod \"redhat-operators-hwq58\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.884592 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zflcz\" (UniqueName: \"kubernetes.io/projected/10201fba-2819-468a-ac74-115ada895ae7-kube-api-access-zflcz\") pod \"redhat-operators-hwq58\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.924697 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:19:40 crc kubenswrapper[4662]: I1208 09:19:40.979085 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:41 crc kubenswrapper[4662]: I1208 09:19:41.197084 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hwq58"] Dec 08 09:19:41 crc kubenswrapper[4662]: W1208 09:19:41.201269 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10201fba_2819_468a_ac74_115ada895ae7.slice/crio-f7d020b67b93d90c0875141944b4070d04276425c17101361f83d0ab986ee8ad WatchSource:0}: Error finding container f7d020b67b93d90c0875141944b4070d04276425c17101361f83d0ab986ee8ad: Status 404 returned error can't find the container with id f7d020b67b93d90c0875141944b4070d04276425c17101361f83d0ab986ee8ad Dec 08 09:19:41 crc kubenswrapper[4662]: I1208 09:19:41.780487 4662 generic.go:334] "Generic (PLEG): container finished" podID="10201fba-2819-468a-ac74-115ada895ae7" containerID="5127789e4675b48c9212bf974fd4244dee84e28f67889571c26fc258917876a3" exitCode=0 Dec 08 09:19:41 crc kubenswrapper[4662]: I1208 09:19:41.780569 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwq58" event={"ID":"10201fba-2819-468a-ac74-115ada895ae7","Type":"ContainerDied","Data":"5127789e4675b48c9212bf974fd4244dee84e28f67889571c26fc258917876a3"} Dec 08 09:19:41 crc kubenswrapper[4662]: I1208 09:19:41.780792 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwq58" event={"ID":"10201fba-2819-468a-ac74-115ada895ae7","Type":"ContainerStarted","Data":"f7d020b67b93d90c0875141944b4070d04276425c17101361f83d0ab986ee8ad"} Dec 08 09:19:42 crc kubenswrapper[4662]: I1208 09:19:42.789191 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwq58" event={"ID":"10201fba-2819-468a-ac74-115ada895ae7","Type":"ContainerStarted","Data":"2f4c7c69faa284c28b66b53b915abc7408cd911ee7d5eaa97ff2a594a1247447"} Dec 08 09:19:43 crc kubenswrapper[4662]: I1208 09:19:43.795254 4662 generic.go:334] "Generic (PLEG): container finished" podID="10201fba-2819-468a-ac74-115ada895ae7" containerID="2f4c7c69faa284c28b66b53b915abc7408cd911ee7d5eaa97ff2a594a1247447" exitCode=0 Dec 08 09:19:43 crc kubenswrapper[4662]: I1208 09:19:43.795693 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwq58" event={"ID":"10201fba-2819-468a-ac74-115ada895ae7","Type":"ContainerDied","Data":"2f4c7c69faa284c28b66b53b915abc7408cd911ee7d5eaa97ff2a594a1247447"} Dec 08 09:19:44 crc 
kubenswrapper[4662]: I1208 09:19:44.802452 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwq58" event={"ID":"10201fba-2819-468a-ac74-115ada895ae7","Type":"ContainerStarted","Data":"33eb88e67cdd43d1b8bc9af05fa455df5cda07335c1927ec6fb632c6f6911845"} Dec 08 09:19:44 crc kubenswrapper[4662]: I1208 09:19:44.826064 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hwq58" podStartSLOduration=2.427018803 podStartE2EDuration="4.826049512s" podCreationTimestamp="2025-12-08 09:19:40 +0000 UTC" firstStartedPulling="2025-12-08 09:19:41.782856855 +0000 UTC m=+305.351884845" lastFinishedPulling="2025-12-08 09:19:44.181887564 +0000 UTC m=+307.750915554" observedRunningTime="2025-12-08 09:19:44.821863938 +0000 UTC m=+308.390891928" watchObservedRunningTime="2025-12-08 09:19:44.826049512 +0000 UTC m=+308.395077502" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.062995 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7b754"] Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.064299 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.071977 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7b754"] Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.217770 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnqqb\" (UniqueName: \"kubernetes.io/projected/23663fdd-b72b-4956-9b86-5da768bcf9c1-kube-api-access-mnqqb\") pod \"redhat-operators-7b754\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.217845 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-utilities\") pod \"redhat-operators-7b754\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.217876 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-catalog-content\") pod \"redhat-operators-7b754\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.319205 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-utilities\") pod \"redhat-operators-7b754\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.319273 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-catalog-content\") pod \"redhat-operators-7b754\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.319355 4662 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-mnqqb\" (UniqueName: \"kubernetes.io/projected/23663fdd-b72b-4956-9b86-5da768bcf9c1-kube-api-access-mnqqb\") pod \"redhat-operators-7b754\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.320169 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-utilities\") pod \"redhat-operators-7b754\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.320178 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-catalog-content\") pod \"redhat-operators-7b754\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.346454 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnqqb\" (UniqueName: \"kubernetes.io/projected/23663fdd-b72b-4956-9b86-5da768bcf9c1-kube-api-access-mnqqb\") pod \"redhat-operators-7b754\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.378021 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.602574 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7b754"] Dec 08 09:19:45 crc kubenswrapper[4662]: W1208 09:19:45.618927 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23663fdd_b72b_4956_9b86_5da768bcf9c1.slice/crio-4cadd8a871eb788b1b13c55a0ed884cf4fa72fbee46befd970d8f384dcfe15d4 WatchSource:0}: Error finding container 4cadd8a871eb788b1b13c55a0ed884cf4fa72fbee46befd970d8f384dcfe15d4: Status 404 returned error can't find the container with id 4cadd8a871eb788b1b13c55a0ed884cf4fa72fbee46befd970d8f384dcfe15d4 Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.809402 4662 generic.go:334] "Generic (PLEG): container finished" podID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerID="c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8" exitCode=0 Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.810496 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b754" event={"ID":"23663fdd-b72b-4956-9b86-5da768bcf9c1","Type":"ContainerDied","Data":"c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8"} Dec 08 09:19:45 crc kubenswrapper[4662]: I1208 09:19:45.810521 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b754" event={"ID":"23663fdd-b72b-4956-9b86-5da768bcf9c1","Type":"ContainerStarted","Data":"4cadd8a871eb788b1b13c55a0ed884cf4fa72fbee46befd970d8f384dcfe15d4"} Dec 08 09:19:47 crc kubenswrapper[4662]: I1208 09:19:47.823873 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b754" 
event={"ID":"23663fdd-b72b-4956-9b86-5da768bcf9c1","Type":"ContainerStarted","Data":"9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de"} Dec 08 09:19:48 crc kubenswrapper[4662]: I1208 09:19:48.832099 4662 generic.go:334] "Generic (PLEG): container finished" podID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerID="9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de" exitCode=0 Dec 08 09:19:48 crc kubenswrapper[4662]: I1208 09:19:48.832160 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b754" event={"ID":"23663fdd-b72b-4956-9b86-5da768bcf9c1","Type":"ContainerDied","Data":"9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de"} Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.464314 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-8lhzr"] Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.465553 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.473675 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-utilities\") pod \"redhat-operators-8lhzr\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.473715 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-catalog-content\") pod \"redhat-operators-8lhzr\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.474000 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd2hb\" (UniqueName: \"kubernetes.io/projected/f70914b3-e872-476a-b468-f380adce1373-kube-api-access-fd2hb\") pod \"redhat-operators-8lhzr\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.477480 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lhzr"] Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.574670 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd2hb\" (UniqueName: \"kubernetes.io/projected/f70914b3-e872-476a-b468-f380adce1373-kube-api-access-fd2hb\") pod \"redhat-operators-8lhzr\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.574764 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-utilities\") pod \"redhat-operators-8lhzr\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.574782 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-catalog-content\") pod \"redhat-operators-8lhzr\" 
(UID: \"f70914b3-e872-476a-b468-f380adce1373\") " pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.575492 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-catalog-content\") pod \"redhat-operators-8lhzr\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.575659 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-utilities\") pod \"redhat-operators-8lhzr\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.592095 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd2hb\" (UniqueName: \"kubernetes.io/projected/f70914b3-e872-476a-b468-f380adce1373-kube-api-access-fd2hb\") pod \"redhat-operators-8lhzr\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.784160 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:19:49 crc kubenswrapper[4662]: I1208 09:19:49.839937 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b754" event={"ID":"23663fdd-b72b-4956-9b86-5da768bcf9c1","Type":"ContainerStarted","Data":"eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8"} Dec 08 09:19:50 crc kubenswrapper[4662]: I1208 09:19:50.204413 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7b754" podStartSLOduration=1.542171362 podStartE2EDuration="5.204395054s" podCreationTimestamp="2025-12-08 09:19:45 +0000 UTC" firstStartedPulling="2025-12-08 09:19:45.811346904 +0000 UTC m=+309.380374894" lastFinishedPulling="2025-12-08 09:19:49.473570576 +0000 UTC m=+313.042598586" observedRunningTime="2025-12-08 09:19:49.859787424 +0000 UTC m=+313.428815414" watchObservedRunningTime="2025-12-08 09:19:50.204395054 +0000 UTC m=+313.773423044" Dec 08 09:19:50 crc kubenswrapper[4662]: I1208 09:19:50.206177 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-8lhzr"] Dec 08 09:19:50 crc kubenswrapper[4662]: I1208 09:19:50.542481 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Dec 08 09:19:50 crc kubenswrapper[4662]: I1208 09:19:50.845062 4662 generic.go:334] "Generic (PLEG): container finished" podID="f70914b3-e872-476a-b468-f380adce1373" containerID="6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6" exitCode=0 Dec 08 09:19:50 crc kubenswrapper[4662]: I1208 09:19:50.845109 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lhzr" event={"ID":"f70914b3-e872-476a-b468-f380adce1373","Type":"ContainerDied","Data":"6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6"} Dec 08 09:19:50 crc kubenswrapper[4662]: I1208 09:19:50.845404 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lhzr" 
event={"ID":"f70914b3-e872-476a-b468-f380adce1373","Type":"ContainerStarted","Data":"7119ff8c820d03550650f05567310a639d4ae77f60f31b6523f53fed9f24282e"} Dec 08 09:19:50 crc kubenswrapper[4662]: I1208 09:19:50.980329 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:50 crc kubenswrapper[4662]: I1208 09:19:50.980466 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:51 crc kubenswrapper[4662]: I1208 09:19:51.019901 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:51 crc kubenswrapper[4662]: I1208 09:19:51.851443 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lhzr" event={"ID":"f70914b3-e872-476a-b468-f380adce1373","Type":"ContainerStarted","Data":"a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e"} Dec 08 09:19:51 crc kubenswrapper[4662]: I1208 09:19:51.893163 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:19:52 crc kubenswrapper[4662]: I1208 09:19:52.863793 4662 generic.go:334] "Generic (PLEG): container finished" podID="f70914b3-e872-476a-b468-f380adce1373" containerID="a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e" exitCode=0 Dec 08 09:19:52 crc kubenswrapper[4662]: I1208 09:19:52.865228 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lhzr" event={"ID":"f70914b3-e872-476a-b468-f380adce1373","Type":"ContainerDied","Data":"a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e"} Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.263587 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cr7wn"] Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.265228 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.269912 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cr7wn"] Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.441093 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgccc\" (UniqueName: \"kubernetes.io/projected/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-kube-api-access-sgccc\") pod \"redhat-operators-cr7wn\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.441158 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-catalog-content\") pod \"redhat-operators-cr7wn\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.441190 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-utilities\") pod \"redhat-operators-cr7wn\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.542462 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgccc\" (UniqueName: \"kubernetes.io/projected/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-kube-api-access-sgccc\") pod \"redhat-operators-cr7wn\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.542563 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-catalog-content\") pod \"redhat-operators-cr7wn\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.542601 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-utilities\") pod \"redhat-operators-cr7wn\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.543125 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-catalog-content\") pod \"redhat-operators-cr7wn\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.543162 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-utilities\") pod \"redhat-operators-cr7wn\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.559819 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sgccc\" (UniqueName: \"kubernetes.io/projected/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-kube-api-access-sgccc\") pod \"redhat-operators-cr7wn\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.585313 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.885825 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lhzr" event={"ID":"f70914b3-e872-476a-b468-f380adce1373","Type":"ContainerStarted","Data":"7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6"} Dec 08 09:19:54 crc kubenswrapper[4662]: I1208 09:19:54.908863 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-8lhzr" podStartSLOduration=2.782200321 podStartE2EDuration="5.908843609s" podCreationTimestamp="2025-12-08 09:19:49 +0000 UTC" firstStartedPulling="2025-12-08 09:19:50.846638619 +0000 UTC m=+314.415666609" lastFinishedPulling="2025-12-08 09:19:53.973281907 +0000 UTC m=+317.542309897" observedRunningTime="2025-12-08 09:19:54.906171966 +0000 UTC m=+318.475199956" watchObservedRunningTime="2025-12-08 09:19:54.908843609 +0000 UTC m=+318.477871599" Dec 08 09:19:55 crc kubenswrapper[4662]: I1208 09:19:55.083848 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cr7wn"] Dec 08 09:19:55 crc kubenswrapper[4662]: I1208 09:19:55.378996 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:55 crc kubenswrapper[4662]: I1208 09:19:55.379312 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:55 crc kubenswrapper[4662]: I1208 09:19:55.429875 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:55 crc kubenswrapper[4662]: I1208 09:19:55.892169 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr7wn" event={"ID":"ab30ee95-a03b-4653-a1d0-e9fb3f101af9","Type":"ContainerStarted","Data":"07565f541d6f91406d3f82fc3ec56a31bc89f267e392ba326c30c06c80e4d13f"} Dec 08 09:19:55 crc kubenswrapper[4662]: I1208 09:19:55.948385 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:19:56 crc kubenswrapper[4662]: I1208 09:19:56.898521 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr7wn" event={"ID":"ab30ee95-a03b-4653-a1d0-e9fb3f101af9","Type":"ContainerStarted","Data":"caa4ef45fba22b6b56b6ee3f8f7bc45bc38f223418a5c1b7530a323b7d8c6d8f"} Dec 08 09:19:57 crc kubenswrapper[4662]: I1208 09:19:57.905217 4662 generic.go:334] "Generic (PLEG): container finished" podID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerID="caa4ef45fba22b6b56b6ee3f8f7bc45bc38f223418a5c1b7530a323b7d8c6d8f" exitCode=0 Dec 08 09:19:57 crc kubenswrapper[4662]: I1208 09:19:57.905320 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr7wn" event={"ID":"ab30ee95-a03b-4653-a1d0-e9fb3f101af9","Type":"ContainerDied","Data":"caa4ef45fba22b6b56b6ee3f8f7bc45bc38f223418a5c1b7530a323b7d8c6d8f"} Dec 08 09:19:58 crc 
Dec 08 09:19:58 crc kubenswrapper[4662]: I1208 09:19:58.913271 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr7wn" event={"ID":"ab30ee95-a03b-4653-a1d0-e9fb3f101af9","Type":"ContainerStarted","Data":"d71430b05f1ee3c1443923750e26a70e37b29ef0b611035d08a4e9a37403e23d"}
Dec 08 09:19:58 crc kubenswrapper[4662]: I1208 09:19:58.934551 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Dec 08 09:19:59 crc kubenswrapper[4662]: I1208 09:19:59.240161 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984"]
Dec 08 09:19:59 crc kubenswrapper[4662]: I1208 09:19:59.240375 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" podUID="39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" containerName="route-controller-manager" containerID="cri-o://a55294a6190b675127db4d35d87bd931e3e4058a3c47749d2482c929dda8fead" gracePeriod=30
Dec 08 09:19:59 crc kubenswrapper[4662]: I1208 09:19:59.784730 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-8lhzr"
Dec 08 09:19:59 crc kubenswrapper[4662]: I1208 09:19:59.785019 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-8lhzr"
Dec 08 09:20:00 crc kubenswrapper[4662]: I1208 09:20:00.127036 4662 patch_prober.go:28] interesting pod/route-controller-manager-75664bd6d9-9p984 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Dec 08 09:20:00 crc kubenswrapper[4662]: I1208 09:20:00.127090 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" podUID="39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Dec 08 09:20:00 crc kubenswrapper[4662]: I1208 09:20:00.851835 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-8lhzr" podUID="f70914b3-e872-476a-b468-f380adce1373" containerName="registry-server" probeResult="failure" output=<
Dec 08 09:20:00 crc kubenswrapper[4662]: timeout: failed to connect service ":50051" within 1s
Dec 08 09:20:00 crc kubenswrapper[4662]: >
Dec 08 09:20:00 crc kubenswrapper[4662]: I1208 09:20:00.924319 4662 generic.go:334] "Generic (PLEG): container finished" podID="39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" containerID="a55294a6190b675127db4d35d87bd931e3e4058a3c47749d2482c929dda8fead" exitCode=0
Dec 08 09:20:00 crc kubenswrapper[4662]: I1208 09:20:00.924399 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" event={"ID":"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba","Type":"ContainerDied","Data":"a55294a6190b675127db4d35d87bd931e3e4058a3c47749d2482c929dda8fead"}
Dec 08 09:20:00 crc kubenswrapper[4662]: I1208 09:20:00.926407 4662 generic.go:334] "Generic (PLEG): container finished" podID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerID="d71430b05f1ee3c1443923750e26a70e37b29ef0b611035d08a4e9a37403e23d" exitCode=0
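
[Editor's note: the startup-probe failure above, 'timeout: failed to connect service ":50051" within 1s', is the marketplace registry-server's gRPC health check against port 50051; the container simply had not started listening yet, and the probe flips to "started" later in the log. A minimal sketch of the TCP-level part of that check; the real probe speaks gRPC, so this only approximates it.]

    import socket

    # TCP-level approximation of the registry-server probe: can we connect
    # to :50051 within 1 s? A refused or timed-out connection corresponds
    # to the probe output quoted above.
    def can_connect(host="127.0.0.1", port=50051, timeout=1.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print("registry-server reachable:", can_connect())
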
Dec 08 09:20:00 crc kubenswrapper[4662]: I1208 09:20:00.926453 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr7wn" event={"ID":"ab30ee95-a03b-4653-a1d0-e9fb3f101af9","Type":"ContainerDied","Data":"d71430b05f1ee3c1443923750e26a70e37b29ef0b611035d08a4e9a37403e23d"} Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.476386 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.569836 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2"] Dec 08 09:20:01 crc kubenswrapper[4662]: E1208 09:20:01.570062 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" containerName="route-controller-manager" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.570075 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" containerName="route-controller-manager" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.570160 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" containerName="route-controller-manager" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.570488 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.598911 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2"] Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.661654 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-config\") pod \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.661712 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-client-ca\") pod \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.661771 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvfkw\" (UniqueName: \"kubernetes.io/projected/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-kube-api-access-pvfkw\") pod \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.661805 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-serving-cert\") pod \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\" (UID: \"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba\") " Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.661890 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5kf\" (UniqueName: \"kubernetes.io/projected/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-kube-api-access-dx5kf\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: 
\"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.662506 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" (UID: "39c3f664-01b9-4f9a-ba3a-52ac30aef9ba"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.662523 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-config" (OuterVolumeSpecName: "config") pod "39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" (UID: "39c3f664-01b9-4f9a-ba3a-52ac30aef9ba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.661908 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-config\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.662613 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-client-ca\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.662638 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-serving-cert\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.662894 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.662916 4662 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.668362 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-kube-api-access-pvfkw" (OuterVolumeSpecName: "kube-api-access-pvfkw") pod "39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" (UID: "39c3f664-01b9-4f9a-ba3a-52ac30aef9ba"). InnerVolumeSpecName "kube-api-access-pvfkw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.670970 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" (UID: "39c3f664-01b9-4f9a-ba3a-52ac30aef9ba"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.763333 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx5kf\" (UniqueName: \"kubernetes.io/projected/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-kube-api-access-dx5kf\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.763373 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-config\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.763398 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-client-ca\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.763417 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-serving-cert\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.763476 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.763491 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvfkw\" (UniqueName: \"kubernetes.io/projected/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba-kube-api-access-pvfkw\") on node \"crc\" DevicePath \"\"" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.764790 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-client-ca\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.765609 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-config\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " 
pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.768441 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-serving-cert\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.780993 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx5kf\" (UniqueName: \"kubernetes.io/projected/e878ebd7-10c2-4ddc-9c60-911f3adeccfe-kube-api-access-dx5kf\") pod \"route-controller-manager-77f4b48dbc-w5mn2\" (UID: \"e878ebd7-10c2-4ddc-9c60-911f3adeccfe\") " pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.893765 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.939187 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr7wn" event={"ID":"ab30ee95-a03b-4653-a1d0-e9fb3f101af9","Type":"ContainerStarted","Data":"f7a9b7a04287bc8225c16500bd40ee9f8f2c36050da1a17b62784d46f5430374"} Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.944451 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" event={"ID":"39c3f664-01b9-4f9a-ba3a-52ac30aef9ba","Type":"ContainerDied","Data":"17cc2fa78da78a7d12a9c944393539166e4bd36a5dcc95841ecdffc3e9cba1e6"} Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.944495 4662 scope.go:117] "RemoveContainer" containerID="a55294a6190b675127db4d35d87bd931e3e4058a3c47749d2482c929dda8fead" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.944605 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.959940 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cr7wn" podStartSLOduration=4.164586913 podStartE2EDuration="7.959923345s" podCreationTimestamp="2025-12-08 09:19:54 +0000 UTC" firstStartedPulling="2025-12-08 09:19:57.906706426 +0000 UTC m=+321.475734416" lastFinishedPulling="2025-12-08 09:20:01.702042858 +0000 UTC m=+325.271070848" observedRunningTime="2025-12-08 09:20:01.957320674 +0000 UTC m=+325.526348664" watchObservedRunningTime="2025-12-08 09:20:01.959923345 +0000 UTC m=+325.528951335" Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.980091 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984"] Dec 08 09:20:01 crc kubenswrapper[4662]: I1208 09:20:01.985347 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-75664bd6d9-9p984"] Dec 08 09:20:02 crc kubenswrapper[4662]: I1208 09:20:02.323934 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2"] Dec 08 09:20:02 crc kubenswrapper[4662]: W1208 09:20:02.329301 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode878ebd7_10c2_4ddc_9c60_911f3adeccfe.slice/crio-8e9d02a79c34b8f068e1ee1c97a3cbbfca8f1ea2997ebc02eb89057e025c363f WatchSource:0}: Error finding container 8e9d02a79c34b8f068e1ee1c97a3cbbfca8f1ea2997ebc02eb89057e025c363f: Status 404 returned error can't find the container with id 8e9d02a79c34b8f068e1ee1c97a3cbbfca8f1ea2997ebc02eb89057e025c363f Dec 08 09:20:02 crc kubenswrapper[4662]: I1208 09:20:02.704204 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39c3f664-01b9-4f9a-ba3a-52ac30aef9ba" path="/var/lib/kubelet/pods/39c3f664-01b9-4f9a-ba3a-52ac30aef9ba/volumes" Dec 08 09:20:02 crc kubenswrapper[4662]: I1208 09:20:02.950076 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" event={"ID":"e878ebd7-10c2-4ddc-9c60-911f3adeccfe","Type":"ContainerStarted","Data":"ff490762fd96fbb836a9a256bba777c6c310e824911ffbc506c0fa82b73b1a7f"} Dec 08 09:20:02 crc kubenswrapper[4662]: I1208 09:20:02.950348 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" event={"ID":"e878ebd7-10c2-4ddc-9c60-911f3adeccfe","Type":"ContainerStarted","Data":"8e9d02a79c34b8f068e1ee1c97a3cbbfca8f1ea2997ebc02eb89057e025c363f"} Dec 08 09:20:02 crc kubenswrapper[4662]: I1208 09:20:02.950822 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:02 crc kubenswrapper[4662]: I1208 09:20:02.955157 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" Dec 08 09:20:02 crc kubenswrapper[4662]: I1208 09:20:02.976962 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" podStartSLOduration=3.976948468 
podStartE2EDuration="3.976948468s" podCreationTimestamp="2025-12-08 09:19:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:20:02.975212461 +0000 UTC m=+326.544240451" watchObservedRunningTime="2025-12-08 09:20:02.976948468 +0000 UTC m=+326.545976458" Dec 08 09:20:04 crc kubenswrapper[4662]: I1208 09:20:04.587000 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:20:04 crc kubenswrapper[4662]: I1208 09:20:04.587060 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:20:05 crc kubenswrapper[4662]: I1208 09:20:05.631042 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cr7wn" podUID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerName="registry-server" probeResult="failure" output=< Dec 08 09:20:05 crc kubenswrapper[4662]: timeout: failed to connect service ":50051" within 1s Dec 08 09:20:05 crc kubenswrapper[4662]: > Dec 08 09:20:09 crc kubenswrapper[4662]: I1208 09:20:09.831630 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:20:09 crc kubenswrapper[4662]: I1208 09:20:09.890590 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:20:14 crc kubenswrapper[4662]: I1208 09:20:14.628734 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:20:14 crc kubenswrapper[4662]: I1208 09:20:14.674575 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.232018 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-658fd5994d-7tf7p"] Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.232724 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" podUID="2573fbbb-f03f-4c85-b27f-49cfcd239ec3" containerName="controller-manager" containerID="cri-o://3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce" gracePeriod=30 Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.625969 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.798101 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s6fm\" (UniqueName: \"kubernetes.io/projected/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-kube-api-access-4s6fm\") pod \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.798145 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-config\") pod \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.798223 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-serving-cert\") pod \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.798246 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-client-ca\") pod \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.798278 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-proxy-ca-bundles\") pod \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\" (UID: \"2573fbbb-f03f-4c85-b27f-49cfcd239ec3\") " Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.799037 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2573fbbb-f03f-4c85-b27f-49cfcd239ec3" (UID: "2573fbbb-f03f-4c85-b27f-49cfcd239ec3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.799159 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-config" (OuterVolumeSpecName: "config") pod "2573fbbb-f03f-4c85-b27f-49cfcd239ec3" (UID: "2573fbbb-f03f-4c85-b27f-49cfcd239ec3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.799765 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-client-ca" (OuterVolumeSpecName: "client-ca") pod "2573fbbb-f03f-4c85-b27f-49cfcd239ec3" (UID: "2573fbbb-f03f-4c85-b27f-49cfcd239ec3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.810017 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2573fbbb-f03f-4c85-b27f-49cfcd239ec3" (UID: "2573fbbb-f03f-4c85-b27f-49cfcd239ec3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.810410 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-kube-api-access-4s6fm" (OuterVolumeSpecName: "kube-api-access-4s6fm") pod "2573fbbb-f03f-4c85-b27f-49cfcd239ec3" (UID: "2573fbbb-f03f-4c85-b27f-49cfcd239ec3"). InnerVolumeSpecName "kube-api-access-4s6fm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.899573 4662 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.899614 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s6fm\" (UniqueName: \"kubernetes.io/projected/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-kube-api-access-4s6fm\") on node \"crc\" DevicePath \"\"" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.899630 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.899643 4662 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:20:19 crc kubenswrapper[4662]: I1208 09:20:19.899653 4662 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2573fbbb-f03f-4c85-b27f-49cfcd239ec3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:20:20 crc kubenswrapper[4662]: I1208 09:20:20.050123 4662 generic.go:334] "Generic (PLEG): container finished" podID="2573fbbb-f03f-4c85-b27f-49cfcd239ec3" containerID="3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce" exitCode=0 Dec 08 09:20:20 crc kubenswrapper[4662]: I1208 09:20:20.050207 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" Dec 08 09:20:20 crc kubenswrapper[4662]: I1208 09:20:20.050234 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" event={"ID":"2573fbbb-f03f-4c85-b27f-49cfcd239ec3","Type":"ContainerDied","Data":"3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce"} Dec 08 09:20:20 crc kubenswrapper[4662]: I1208 09:20:20.050516 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-658fd5994d-7tf7p" event={"ID":"2573fbbb-f03f-4c85-b27f-49cfcd239ec3","Type":"ContainerDied","Data":"e092e64aa14f59c8da43caf92fbea70cd6e4128b6fa447eb1119f0babccd4616"} Dec 08 09:20:20 crc kubenswrapper[4662]: I1208 09:20:20.050542 4662 scope.go:117] "RemoveContainer" containerID="3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce" Dec 08 09:20:20 crc kubenswrapper[4662]: I1208 09:20:20.939258 4662 scope.go:117] "RemoveContainer" containerID="3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce" Dec 08 09:20:20 crc kubenswrapper[4662]: E1208 09:20:20.940009 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce\": container with ID starting with 3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce not found: ID does not exist" containerID="3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce" Dec 08 09:20:20 crc kubenswrapper[4662]: I1208 09:20:20.940041 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce"} err="failed to get container status \"3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce\": rpc error: code = NotFound desc = could not find container \"3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce\": container with ID starting with 3fb32e34a5aa7c879d5329155e3ef16f91c484c834aa1dc99ea87c3896aae3ce not found: ID does not exist" Dec 08 09:20:20 crc kubenswrapper[4662]: I1208 09:20:20.961553 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-658fd5994d-7tf7p"] Dec 08 09:20:20 crc kubenswrapper[4662]: I1208 09:20:20.967680 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-658fd5994d-7tf7p"] Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.015805 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65f686b869-jhbg4"] Dec 08 09:20:21 crc kubenswrapper[4662]: E1208 09:20:21.016068 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2573fbbb-f03f-4c85-b27f-49cfcd239ec3" containerName="controller-manager" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.016089 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2573fbbb-f03f-4c85-b27f-49cfcd239ec3" containerName="controller-manager" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.016210 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="2573fbbb-f03f-4c85-b27f-49cfcd239ec3" containerName="controller-manager" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.016662 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.019072 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.019414 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.019635 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.020187 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.020351 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.020494 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.028309 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65f686b869-jhbg4"] Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.031297 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.112686 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c9a6bd3-cabb-4d85-bf32-29da280ff985-config\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.112790 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj26r\" (UniqueName: \"kubernetes.io/projected/0c9a6bd3-cabb-4d85-bf32-29da280ff985-kube-api-access-tj26r\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.112817 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c9a6bd3-cabb-4d85-bf32-29da280ff985-proxy-ca-bundles\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.112852 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c9a6bd3-cabb-4d85-bf32-29da280ff985-serving-cert\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.112941 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/0c9a6bd3-cabb-4d85-bf32-29da280ff985-client-ca\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.214142 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c9a6bd3-cabb-4d85-bf32-29da280ff985-config\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.214195 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj26r\" (UniqueName: \"kubernetes.io/projected/0c9a6bd3-cabb-4d85-bf32-29da280ff985-kube-api-access-tj26r\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.214221 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c9a6bd3-cabb-4d85-bf32-29da280ff985-proxy-ca-bundles\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.214242 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c9a6bd3-cabb-4d85-bf32-29da280ff985-serving-cert\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.214274 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c9a6bd3-cabb-4d85-bf32-29da280ff985-client-ca\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.217195 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c9a6bd3-cabb-4d85-bf32-29da280ff985-client-ca\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.217508 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0c9a6bd3-cabb-4d85-bf32-29da280ff985-proxy-ca-bundles\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.218509 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c9a6bd3-cabb-4d85-bf32-29da280ff985-config\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " 
pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.226315 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c9a6bd3-cabb-4d85-bf32-29da280ff985-serving-cert\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.230790 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj26r\" (UniqueName: \"kubernetes.io/projected/0c9a6bd3-cabb-4d85-bf32-29da280ff985-kube-api-access-tj26r\") pod \"controller-manager-65f686b869-jhbg4\" (UID: \"0c9a6bd3-cabb-4d85-bf32-29da280ff985\") " pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.335785 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:21 crc kubenswrapper[4662]: I1208 09:20:21.532520 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65f686b869-jhbg4"] Dec 08 09:20:21 crc kubenswrapper[4662]: W1208 09:20:21.548641 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c9a6bd3_cabb_4d85_bf32_29da280ff985.slice/crio-d888299d2f938edc8923f1b80ada9a67168463a698db22e674b3290b0cc7354c WatchSource:0}: Error finding container d888299d2f938edc8923f1b80ada9a67168463a698db22e674b3290b0cc7354c: Status 404 returned error can't find the container with id d888299d2f938edc8923f1b80ada9a67168463a698db22e674b3290b0cc7354c Dec 08 09:20:22 crc kubenswrapper[4662]: I1208 09:20:22.061110 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" event={"ID":"0c9a6bd3-cabb-4d85-bf32-29da280ff985","Type":"ContainerStarted","Data":"281047d3e502b2eed38354d8f2bf441e3f81b53a72a9a8f62b040df6f6cb91ac"} Dec 08 09:20:22 crc kubenswrapper[4662]: I1208 09:20:22.061386 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:22 crc kubenswrapper[4662]: I1208 09:20:22.061398 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" event={"ID":"0c9a6bd3-cabb-4d85-bf32-29da280ff985","Type":"ContainerStarted","Data":"d888299d2f938edc8923f1b80ada9a67168463a698db22e674b3290b0cc7354c"} Dec 08 09:20:22 crc kubenswrapper[4662]: I1208 09:20:22.066940 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" Dec 08 09:20:22 crc kubenswrapper[4662]: I1208 09:20:22.088940 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65f686b869-jhbg4" podStartSLOduration=3.088891115 podStartE2EDuration="3.088891115s" podCreationTimestamp="2025-12-08 09:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:20:22.08265654 +0000 UTC m=+345.651684530" watchObservedRunningTime="2025-12-08 09:20:22.088891115 +0000 UTC m=+345.657919125" Dec 08 09:20:22 crc 
Dec 08 09:20:22 crc kubenswrapper[4662]: I1208 09:20:22.706126 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2573fbbb-f03f-4c85-b27f-49cfcd239ec3" path="/var/lib/kubelet/pods/2573fbbb-f03f-4c85-b27f-49cfcd239ec3/volumes"
Dec 08 09:20:32 crc kubenswrapper[4662]: I1208 09:20:32.612000 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 09:20:32 crc kubenswrapper[4662]: I1208 09:20:32.612999 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.164803 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9ghnp"]
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.165837 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.182728 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9ghnp"]
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.296734 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4651fe7-5244-40b6-9950-68f0411bb1b3-bound-sa-token\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.296828 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4651fe7-5244-40b6-9950-68f0411bb1b3-trusted-ca\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.296859 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4651fe7-5244-40b6-9950-68f0411bb1b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.296891 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbcdr\" (UniqueName: \"kubernetes.io/projected/d4651fe7-5244-40b6-9950-68f0411bb1b3-kube-api-access-qbcdr\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.296909 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4651fe7-5244-40b6-9950-68f0411bb1b3-registry-certificates\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.296952 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4651fe7-5244-40b6-9950-68f0411bb1b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.296972 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4651fe7-5244-40b6-9950-68f0411bb1b3-registry-tls\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.296999 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.337976 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.398900 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4651fe7-5244-40b6-9950-68f0411bb1b3-bound-sa-token\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.398966 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4651fe7-5244-40b6-9950-68f0411bb1b3-trusted-ca\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.398993 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4651fe7-5244-40b6-9950-68f0411bb1b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.399020 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbcdr\" (UniqueName: \"kubernetes.io/projected/d4651fe7-5244-40b6-9950-68f0411bb1b3-kube-api-access-qbcdr\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.399038 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4651fe7-5244-40b6-9950-68f0411bb1b3-registry-certificates\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.399512 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4651fe7-5244-40b6-9950-68f0411bb1b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.400190 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/d4651fe7-5244-40b6-9950-68f0411bb1b3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.400892 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/d4651fe7-5244-40b6-9950-68f0411bb1b3-registry-certificates\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.402123 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4651fe7-5244-40b6-9950-68f0411bb1b3-registry-tls\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.402818 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d4651fe7-5244-40b6-9950-68f0411bb1b3-trusted-ca\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.407675 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/d4651fe7-5244-40b6-9950-68f0411bb1b3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.412508 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/d4651fe7-5244-40b6-9950-68f0411bb1b3-registry-tls\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.414932 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbcdr\" (UniqueName: \"kubernetes.io/projected/d4651fe7-5244-40b6-9950-68f0411bb1b3-kube-api-access-qbcdr\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.421638 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d4651fe7-5244-40b6-9950-68f0411bb1b3-bound-sa-token\") pod \"image-registry-66df7c8f76-9ghnp\" (UID: \"d4651fe7-5244-40b6-9950-68f0411bb1b3\") " pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.482603 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:34 crc kubenswrapper[4662]: I1208 09:20:34.900834 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-9ghnp"]
Dec 08 09:20:35 crc kubenswrapper[4662]: I1208 09:20:35.131890 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp" event={"ID":"d4651fe7-5244-40b6-9950-68f0411bb1b3","Type":"ContainerStarted","Data":"4fd829b685d107876f75c7b5ff6e83591daa4ea3929cad05eb76a95a0c039050"}
Dec 08 09:20:35 crc kubenswrapper[4662]: I1208 09:20:35.132193 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:35 crc kubenswrapper[4662]: I1208 09:20:35.132223 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp" event={"ID":"d4651fe7-5244-40b6-9950-68f0411bb1b3","Type":"ContainerStarted","Data":"de1b6667e2d70fdc12ddbddeaca01de5e912690273ba60a7701d434430396e2b"}
Dec 08 09:20:35 crc kubenswrapper[4662]: I1208 09:20:35.161470 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp" podStartSLOduration=1.161400711 podStartE2EDuration="1.161400711s" podCreationTimestamp="2025-12-08 09:20:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:20:35.155299659 +0000 UTC m=+358.724327689" watchObservedRunningTime="2025-12-08 09:20:35.161400711 +0000 UTC m=+358.730428711"
Dec 08 09:20:54 crc kubenswrapper[4662]: I1208 09:20:54.490573 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-9ghnp"
Dec 08 09:20:54 crc kubenswrapper[4662]: I1208 09:20:54.548014 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nw69"]
Dec 08 09:21:02 crc kubenswrapper[4662]: I1208 09:21:02.611720 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:21:19 crc kubenswrapper[4662]: I1208 09:21:19.586124 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" podUID="87f08450-5929-4441-88f4-fbaec18e0f73" containerName="registry" containerID="cri-o://fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0" gracePeriod=30 Dec 08 09:21:19 crc kubenswrapper[4662]: I1208 09:21:19.990845 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.162104 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-bound-sa-token\") pod \"87f08450-5929-4441-88f4-fbaec18e0f73\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.162187 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-registry-certificates\") pod \"87f08450-5929-4441-88f4-fbaec18e0f73\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.162359 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"87f08450-5929-4441-88f4-fbaec18e0f73\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.162395 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-trusted-ca\") pod \"87f08450-5929-4441-88f4-fbaec18e0f73\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.162421 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87f08450-5929-4441-88f4-fbaec18e0f73-ca-trust-extracted\") pod \"87f08450-5929-4441-88f4-fbaec18e0f73\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.162482 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87f08450-5929-4441-88f4-fbaec18e0f73-installation-pull-secrets\") pod \"87f08450-5929-4441-88f4-fbaec18e0f73\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.162513 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqw4z\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-kube-api-access-xqw4z\") pod \"87f08450-5929-4441-88f4-fbaec18e0f73\" (UID: \"87f08450-5929-4441-88f4-fbaec18e0f73\") " Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.162536 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-registry-tls\") pod \"87f08450-5929-4441-88f4-fbaec18e0f73\" (UID: 
\"87f08450-5929-4441-88f4-fbaec18e0f73\") " Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.162907 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "87f08450-5929-4441-88f4-fbaec18e0f73" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.164019 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "87f08450-5929-4441-88f4-fbaec18e0f73" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.168942 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-kube-api-access-xqw4z" (OuterVolumeSpecName: "kube-api-access-xqw4z") pod "87f08450-5929-4441-88f4-fbaec18e0f73" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73"). InnerVolumeSpecName "kube-api-access-xqw4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.171186 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "87f08450-5929-4441-88f4-fbaec18e0f73" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.172359 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "87f08450-5929-4441-88f4-fbaec18e0f73" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.172713 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87f08450-5929-4441-88f4-fbaec18e0f73-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "87f08450-5929-4441-88f4-fbaec18e0f73" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.178083 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "87f08450-5929-4441-88f4-fbaec18e0f73" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.181102 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f08450-5929-4441-88f4-fbaec18e0f73-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "87f08450-5929-4441-88f4-fbaec18e0f73" (UID: "87f08450-5929-4441-88f4-fbaec18e0f73"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.263866 4662 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.263906 4662 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.263916 4662 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/87f08450-5929-4441-88f4-fbaec18e0f73-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.263943 4662 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/87f08450-5929-4441-88f4-fbaec18e0f73-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.263955 4662 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/87f08450-5929-4441-88f4-fbaec18e0f73-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.263963 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqw4z\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-kube-api-access-xqw4z\") on node \"crc\" DevicePath \"\"" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.263971 4662 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/87f08450-5929-4441-88f4-fbaec18e0f73-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.378885 4662 generic.go:334] "Generic (PLEG): container finished" podID="87f08450-5929-4441-88f4-fbaec18e0f73" containerID="fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0" exitCode=0 Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.378939 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" event={"ID":"87f08450-5929-4441-88f4-fbaec18e0f73","Type":"ContainerDied","Data":"fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0"} Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.378968 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69" event={"ID":"87f08450-5929-4441-88f4-fbaec18e0f73","Type":"ContainerDied","Data":"bc6b78d83f108170f6150810201863d6a2d05b1de1f35a695d00df014cef352e"} Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.378995 4662 scope.go:117] "RemoveContainer" containerID="fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0" Dec 08 09:21:20 crc 
Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.379324 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4nw69"
Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.398852 4662 scope.go:117] "RemoveContainer" containerID="fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0"
Dec 08 09:21:20 crc kubenswrapper[4662]: E1208 09:21:20.399697 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0\": container with ID starting with fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0 not found: ID does not exist" containerID="fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0"
Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.399891 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0"} err="failed to get container status \"fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0\": rpc error: code = NotFound desc = could not find container \"fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0\": container with ID starting with fe7fc390be01f7ed1668376c897409a6656c9d67d0ed08c63b3df939d8d74da0 not found: ID does not exist"
Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.416803 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nw69"]
Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.420343 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4nw69"]
Dec 08 09:21:20 crc kubenswrapper[4662]: I1208 09:21:20.706639 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f08450-5929-4441-88f4-fbaec18e0f73" path="/var/lib/kubelet/pods/87f08450-5929-4441-88f4-fbaec18e0f73/volumes"
Dec 08 09:21:32 crc kubenswrapper[4662]: I1208 09:21:32.611358 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 09:21:32 crc kubenswrapper[4662]: I1208 09:21:32.614985 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 09:21:32 crc kubenswrapper[4662]: I1208 09:21:32.615777 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps"
Dec 08 09:21:32 crc kubenswrapper[4662]: I1208 09:21:32.619021 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e90e87142063402540b30d02a0f13b49b808a8a4a75b80f889930edd7b43a54f"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
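
[Editor's note] The NotFound pair above is a benign race, not a failure: the kubelet asked CRI-O to remove container fe7fc390... and a follow-up status/delete call found it already gone, so the deletor logs the error and moves on. Callers of a gRPC-based API like the CRI make removal idempotent by treating NotFound as success, roughly like this sketch (the helper name is ours):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // alreadyGone reports whether a delete/status error just means the
    // container had already been removed by the runtime.
    func alreadyGone(err error) bool {
        return status.Code(err) == codes.NotFound
    }

    func main() {
        // Simulated runtime error in the shape seen in the log above.
        err := status.Error(codes.NotFound, "could not find container")
        fmt.Println(alreadyGone(err)) // true: safe to treat the delete as complete
    }
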
grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" containerID="cri-o://e90e87142063402540b30d02a0f13b49b808a8a4a75b80f889930edd7b43a54f" gracePeriod=600 Dec 08 09:21:33 crc kubenswrapper[4662]: I1208 09:21:33.457286 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerID="e90e87142063402540b30d02a0f13b49b808a8a4a75b80f889930edd7b43a54f" exitCode=0 Dec 08 09:21:33 crc kubenswrapper[4662]: I1208 09:21:33.457332 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"e90e87142063402540b30d02a0f13b49b808a8a4a75b80f889930edd7b43a54f"} Dec 08 09:21:33 crc kubenswrapper[4662]: I1208 09:21:33.457653 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"57a0d204dde3b59ba48b93d9451be527618c216af11a616391fcdba29c29b462"} Dec 08 09:21:33 crc kubenswrapper[4662]: I1208 09:21:33.457690 4662 scope.go:117] "RemoveContainer" containerID="14f74657a7a6dbbd854bbc05a80d9e495240396635cfb5774a2f9fe23e7dca28" Dec 08 09:23:32 crc kubenswrapper[4662]: I1208 09:23:32.612232 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:23:32 crc kubenswrapper[4662]: I1208 09:23:32.613175 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.592118 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-rdgcp"] Dec 08 09:23:58 crc kubenswrapper[4662]: E1208 09:23:58.592981 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f08450-5929-4441-88f4-fbaec18e0f73" containerName="registry" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.592999 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f08450-5929-4441-88f4-fbaec18e0f73" containerName="registry" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.593121 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f08450-5929-4441-88f4-fbaec18e0f73" containerName="registry" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.593596 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-rdgcp" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.597187 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rxxpp"] Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.598023 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rxxpp" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.599186 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.599350 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.599477 4662 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-hml7s" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.600578 4662 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-g74pj" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.616057 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-rdgcp"] Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.630072 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rxxpp"] Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.645598 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-v4r7x"] Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.646375 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-v4r7x" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.649169 4662 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-rl8gx" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.655874 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-v4r7x"] Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.675810 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d62s9\" (UniqueName: \"kubernetes.io/projected/081cce8e-c3af-41d8-9146-5d62bbe487b8-kube-api-access-d62s9\") pod \"cert-manager-webhook-5655c58dd6-v4r7x\" (UID: \"081cce8e-c3af-41d8-9146-5d62bbe487b8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-v4r7x" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.675884 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9xm7\" (UniqueName: \"kubernetes.io/projected/befccb54-8a05-49dd-b709-b38fbdbd9a04-kube-api-access-s9xm7\") pod \"cert-manager-5b446d88c5-rxxpp\" (UID: \"befccb54-8a05-49dd-b709-b38fbdbd9a04\") " pod="cert-manager/cert-manager-5b446d88c5-rxxpp" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.675931 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4d6r\" (UniqueName: \"kubernetes.io/projected/562e5ac7-c24f-4122-8b58-335957d2545c-kube-api-access-b4d6r\") pod \"cert-manager-cainjector-7f985d654d-rdgcp\" (UID: \"562e5ac7-c24f-4122-8b58-335957d2545c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-rdgcp" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.776984 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4d6r\" (UniqueName: \"kubernetes.io/projected/562e5ac7-c24f-4122-8b58-335957d2545c-kube-api-access-b4d6r\") pod \"cert-manager-cainjector-7f985d654d-rdgcp\" (UID: \"562e5ac7-c24f-4122-8b58-335957d2545c\") " 
pod="cert-manager/cert-manager-cainjector-7f985d654d-rdgcp" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.777086 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d62s9\" (UniqueName: \"kubernetes.io/projected/081cce8e-c3af-41d8-9146-5d62bbe487b8-kube-api-access-d62s9\") pod \"cert-manager-webhook-5655c58dd6-v4r7x\" (UID: \"081cce8e-c3af-41d8-9146-5d62bbe487b8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-v4r7x" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.777435 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9xm7\" (UniqueName: \"kubernetes.io/projected/befccb54-8a05-49dd-b709-b38fbdbd9a04-kube-api-access-s9xm7\") pod \"cert-manager-5b446d88c5-rxxpp\" (UID: \"befccb54-8a05-49dd-b709-b38fbdbd9a04\") " pod="cert-manager/cert-manager-5b446d88c5-rxxpp" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.797035 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d62s9\" (UniqueName: \"kubernetes.io/projected/081cce8e-c3af-41d8-9146-5d62bbe487b8-kube-api-access-d62s9\") pod \"cert-manager-webhook-5655c58dd6-v4r7x\" (UID: \"081cce8e-c3af-41d8-9146-5d62bbe487b8\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-v4r7x" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.797561 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9xm7\" (UniqueName: \"kubernetes.io/projected/befccb54-8a05-49dd-b709-b38fbdbd9a04-kube-api-access-s9xm7\") pod \"cert-manager-5b446d88c5-rxxpp\" (UID: \"befccb54-8a05-49dd-b709-b38fbdbd9a04\") " pod="cert-manager/cert-manager-5b446d88c5-rxxpp" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.814532 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4d6r\" (UniqueName: \"kubernetes.io/projected/562e5ac7-c24f-4122-8b58-335957d2545c-kube-api-access-b4d6r\") pod \"cert-manager-cainjector-7f985d654d-rdgcp\" (UID: \"562e5ac7-c24f-4122-8b58-335957d2545c\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-rdgcp" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.920583 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-rdgcp" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.927733 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-rxxpp" Dec 08 09:23:58 crc kubenswrapper[4662]: I1208 09:23:58.960454 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-v4r7x" Dec 08 09:23:59 crc kubenswrapper[4662]: I1208 09:23:59.167680 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-rxxpp"] Dec 08 09:23:59 crc kubenswrapper[4662]: I1208 09:23:59.184191 4662 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 09:23:59 crc kubenswrapper[4662]: I1208 09:23:59.210705 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-rdgcp"] Dec 08 09:23:59 crc kubenswrapper[4662]: W1208 09:23:59.220029 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod562e5ac7_c24f_4122_8b58_335957d2545c.slice/crio-2f1138562d9faf2a718940dd2d23b1f28e907b0b9d91c27ce48b95ed9a1ffe16 WatchSource:0}: Error finding container 2f1138562d9faf2a718940dd2d23b1f28e907b0b9d91c27ce48b95ed9a1ffe16: Status 404 returned error can't find the container with id 2f1138562d9faf2a718940dd2d23b1f28e907b0b9d91c27ce48b95ed9a1ffe16 Dec 08 09:23:59 crc kubenswrapper[4662]: I1208 09:23:59.249700 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-v4r7x"] Dec 08 09:23:59 crc kubenswrapper[4662]: W1208 09:23:59.251594 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod081cce8e_c3af_41d8_9146_5d62bbe487b8.slice/crio-8c09b269e9445a6218e3c87e9133f8cf9d923356f0125dbaa83f5eafaba7cd05 WatchSource:0}: Error finding container 8c09b269e9445a6218e3c87e9133f8cf9d923356f0125dbaa83f5eafaba7cd05: Status 404 returned error can't find the container with id 8c09b269e9445a6218e3c87e9133f8cf9d923356f0125dbaa83f5eafaba7cd05 Dec 08 09:23:59 crc kubenswrapper[4662]: I1208 09:23:59.282902 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-v4r7x" event={"ID":"081cce8e-c3af-41d8-9146-5d62bbe487b8","Type":"ContainerStarted","Data":"8c09b269e9445a6218e3c87e9133f8cf9d923356f0125dbaa83f5eafaba7cd05"} Dec 08 09:23:59 crc kubenswrapper[4662]: I1208 09:23:59.283722 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-rdgcp" event={"ID":"562e5ac7-c24f-4122-8b58-335957d2545c","Type":"ContainerStarted","Data":"2f1138562d9faf2a718940dd2d23b1f28e907b0b9d91c27ce48b95ed9a1ffe16"} Dec 08 09:23:59 crc kubenswrapper[4662]: I1208 09:23:59.287994 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rxxpp" event={"ID":"befccb54-8a05-49dd-b709-b38fbdbd9a04","Type":"ContainerStarted","Data":"486442459d2b14d94233fc79501f4b69121cdc2a95cf49fa3ee9f05a4cafb012"} Dec 08 09:24:02 crc kubenswrapper[4662]: I1208 09:24:02.611574 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:24:02 crc kubenswrapper[4662]: I1208 09:24:02.612202 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Dec 08 09:24:03 crc kubenswrapper[4662]: I1208 09:24:03.331622 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-v4r7x" event={"ID":"081cce8e-c3af-41d8-9146-5d62bbe487b8","Type":"ContainerStarted","Data":"c858586cc046fe132f24d6e014379f97406c31ea62a1e19d3a43ae7a5612b956"} Dec 08 09:24:03 crc kubenswrapper[4662]: I1208 09:24:03.332136 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-v4r7x" Dec 08 09:24:03 crc kubenswrapper[4662]: I1208 09:24:03.335322 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-rdgcp" event={"ID":"562e5ac7-c24f-4122-8b58-335957d2545c","Type":"ContainerStarted","Data":"eeea6d16efc68673c3f8077b44e566dff79f5efedf6e1833fa7f2e6ac979a7cd"} Dec 08 09:24:03 crc kubenswrapper[4662]: I1208 09:24:03.337828 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-rxxpp" event={"ID":"befccb54-8a05-49dd-b709-b38fbdbd9a04","Type":"ContainerStarted","Data":"90cd5767999462c6903bcac1f60cee2d049d7ca69aa6f8f727b1aa6aa7c0a6a5"} Dec 08 09:24:03 crc kubenswrapper[4662]: I1208 09:24:03.352691 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-v4r7x" podStartSLOduration=2.403301229 podStartE2EDuration="5.352668037s" podCreationTimestamp="2025-12-08 09:23:58 +0000 UTC" firstStartedPulling="2025-12-08 09:23:59.254074401 +0000 UTC m=+562.823102391" lastFinishedPulling="2025-12-08 09:24:02.203441209 +0000 UTC m=+565.772469199" observedRunningTime="2025-12-08 09:24:03.345501806 +0000 UTC m=+566.914529826" watchObservedRunningTime="2025-12-08 09:24:03.352668037 +0000 UTC m=+566.921696017" Dec 08 09:24:03 crc kubenswrapper[4662]: I1208 09:24:03.366643 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7f985d654d-rdgcp" podStartSLOduration=2.393788006 podStartE2EDuration="5.366626789s" podCreationTimestamp="2025-12-08 09:23:58 +0000 UTC" firstStartedPulling="2025-12-08 09:23:59.223595039 +0000 UTC m=+562.792623029" lastFinishedPulling="2025-12-08 09:24:02.196433822 +0000 UTC m=+565.765461812" observedRunningTime="2025-12-08 09:24:03.363773393 +0000 UTC m=+566.932801383" watchObservedRunningTime="2025-12-08 09:24:03.366626789 +0000 UTC m=+566.935654779" Dec 08 09:24:08 crc kubenswrapper[4662]: I1208 09:24:08.963607 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-v4r7x" Dec 08 09:24:08 crc kubenswrapper[4662]: I1208 09:24:08.994681 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-rxxpp" podStartSLOduration=7.901188457 podStartE2EDuration="10.994651366s" podCreationTimestamp="2025-12-08 09:23:58 +0000 UTC" firstStartedPulling="2025-12-08 09:23:59.183936132 +0000 UTC m=+562.752964122" lastFinishedPulling="2025-12-08 09:24:02.277399041 +0000 UTC m=+565.846427031" observedRunningTime="2025-12-08 09:24:03.383201281 +0000 UTC m=+566.952229271" watchObservedRunningTime="2025-12-08 09:24:08.994651366 +0000 UTC m=+572.563679376" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.070486 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fhz87"] Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.070951 4662 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovn-controller" containerID="cri-o://80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03" gracePeriod=30 Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.071056 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="northd" containerID="cri-o://0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9" gracePeriod=30 Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.071120 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovn-acl-logging" containerID="cri-o://cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed" gracePeriod=30 Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.071075 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="sbdb" containerID="cri-o://bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc" gracePeriod=30 Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.071066 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="kube-rbac-proxy-node" containerID="cri-o://d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088" gracePeriod=30 Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.071078 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc" gracePeriod=30 Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.071725 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="nbdb" containerID="cri-o://110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa" gracePeriod=30 Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.129266 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" containerID="cri-o://2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c" gracePeriod=30 Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.385886 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovnkube-controller/3.log" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.388077 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovn-acl-logging/0.log" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.388505 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovn-controller/0.log" Dec 08 09:24:09 crc 
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.388901 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c" exitCode=0
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.388987 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc" exitCode=0
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.389043 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088" exitCode=0
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.389091 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed" exitCode=143
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.389140 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03" exitCode=143
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.389248 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c"}
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.389328 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc"}
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.389429 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088"}
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.389507 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed"}
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.389587 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03"}
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.389652 4662 scope.go:117] "RemoveContainer" containerID="c7f80d74a3a1630f5692043c83886b0e8d1cf7d5087a7b5083ec3a06ea14cddd"
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.391364 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-92hkj_adeadc12-d6e2-4168-a1c0-de79d16c8de9/kube-multus/2.log"
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.393070 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-92hkj_adeadc12-d6e2-4168-a1c0-de79d16c8de9/kube-multus/1.log"
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.393114 4662 generic.go:334] "Generic (PLEG): container finished" podID="adeadc12-d6e2-4168-a1c0-de79d16c8de9" containerID="ee2d7f73f6fa58baaed744f52045b57227c634e502ca8143f4dd2f7011117cbb" exitCode=2
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.393146 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-92hkj" event={"ID":"adeadc12-d6e2-4168-a1c0-de79d16c8de9","Type":"ContainerDied","Data":"ee2d7f73f6fa58baaed744f52045b57227c634e502ca8143f4dd2f7011117cbb"}
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.393710 4662 scope.go:117] "RemoveContainer" containerID="ee2d7f73f6fa58baaed744f52045b57227c634e502ca8143f4dd2f7011117cbb"
Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.393974 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-92hkj_openshift-multus(adeadc12-d6e2-4168-a1c0-de79d16c8de9)\"" pod="openshift-multus/multus-92hkj" podUID="adeadc12-d6e2-4168-a1c0-de79d16c8de9"
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.422835 4662 scope.go:117] "RemoveContainer" containerID="1c8bc8109d09bbc21559064b6bb6d9e2c2d7d2cc5409e1a80081a423680e4027"
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.772038 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovn-acl-logging/0.log"
Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.772604 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovn-controller/0.log"
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828463 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-ovn\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828537 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-script-lib\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828563 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-var-lib-openvswitch\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828587 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-var-lib-cni-networks-ovn-kubernetes\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828610 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-log-socket\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828637 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-systemd\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828661 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-config\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828684 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-bin\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828702 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-etc-openvswitch\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828725 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-kubelet\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828771 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovn-node-metrics-cert\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828792 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-netd\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828817 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-systemd-units\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828835 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-node-log\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828870 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-netns\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828915 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-slash\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828935 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-env-overrides\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828966 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-openvswitch\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.828995 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcn6q\" (UniqueName: \"kubernetes.io/projected/8d221fdb-50ee-4a2a-9db5-30e79f604466-kube-api-access-zcn6q\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829014 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-ovn-kubernetes\") pod \"8d221fdb-50ee-4a2a-9db5-30e79f604466\" (UID: \"8d221fdb-50ee-4a2a-9db5-30e79f604466\") " Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829300 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829336 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829704 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829706 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829722 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829753 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829774 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-node-log" (OuterVolumeSpecName: "node-log") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829785 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829796 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-log-socket" (OuterVolumeSpecName: "log-socket") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829816 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829858 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829880 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-slash" (OuterVolumeSpecName: "host-slash") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.829904 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.830031 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.830116 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.830160 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.830192 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.832415 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-949qw"] Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.832846 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="northd" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.832889 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="northd" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.832922 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.832940 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.832962 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.832979 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.832993 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833005 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.833020 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833034 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.833054 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="nbdb" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833069 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="nbdb" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.833092 4662 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="kube-rbac-proxy-node" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833108 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="kube-rbac-proxy-node" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.833132 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="sbdb" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833148 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="sbdb" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.833176 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovn-acl-logging" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833191 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovn-acl-logging" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.833248 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833265 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.833290 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="kubecfg-setup" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833306 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="kubecfg-setup" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.833324 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833340 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: E1208 09:24:09.833363 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovn-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833379 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovn-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.833598 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="sbdb" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834015 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="nbdb" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834035 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="northd" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834104 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="kube-rbac-proxy-node" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834125 
4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834139 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834151 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovn-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834167 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834188 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834204 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovn-acl-logging" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834656 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.834693 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerName="ovnkube-controller" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.837383 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d221fdb-50ee-4a2a-9db5-30e79f604466-kube-api-access-zcn6q" (OuterVolumeSpecName: "kube-api-access-zcn6q") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "kube-api-access-zcn6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.837620 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.839069 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.846448 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "8d221fdb-50ee-4a2a-9db5-30e79f604466" (UID: "8d221fdb-50ee-4a2a-9db5-30e79f604466"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.929841 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wpm4\" (UniqueName: \"kubernetes.io/projected/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-kube-api-access-9wpm4\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.930092 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-etc-openvswitch\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.930204 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-ovn-node-metrics-cert\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.930289 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-run-netns\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.930383 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.930466 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-kubelet\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.930575 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-node-log\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.930652 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-cni-bin\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.930734 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-var-lib-openvswitch\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.930847 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-cni-netd\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.930929 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-systemd-units\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.931043 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-ovnkube-config\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.931129 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-slash\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.931216 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-ovnkube-script-lib\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.931316 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-run-openvswitch\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.931415 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-run-ovn\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.931499 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-env-overrides\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.931590 4662 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.931667 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-run-systemd\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.931810 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-log-socket\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.931980 4662 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-slash\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932068 4662 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932140 4662 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932212 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcn6q\" (UniqueName: \"kubernetes.io/projected/8d221fdb-50ee-4a2a-9db5-30e79f604466-kube-api-access-zcn6q\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932285 4662 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932352 4662 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932420 4662 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932488 4662 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932557 4662 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932637 4662 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-log-socket\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932707 4662 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932795 4662 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932851 4662 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932903 4662 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.932954 4662 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.933007 4662 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d221fdb-50ee-4a2a-9db5-30e79f604466-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.933057 4662 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.933107 4662 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.933160 4662 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-node-log\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:09 crc kubenswrapper[4662]: I1208 09:24:09.933273 4662 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/8d221fdb-50ee-4a2a-9db5-30e79f604466-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.034420 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-run-openvswitch\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.035584 4662 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-run-ovn\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.035700 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-run-ovn\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.035818 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-env-overrides\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.035910 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-run-systemd\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.035999 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.036115 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-log-socket\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.036228 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wpm4\" (UniqueName: \"kubernetes.io/projected/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-kube-api-access-9wpm4\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.036336 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-etc-openvswitch\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.036433 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-run-netns\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.036529 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-ovn-node-metrics-cert\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.036633 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.036734 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-kubelet\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.036895 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-node-log\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.037085 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-cni-bin\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.037190 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-systemd-units\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.037318 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-var-lib-openvswitch\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.037422 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-cni-netd\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.037523 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-slash\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.037620 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-ovnkube-config\") pod 
\"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.037715 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-ovnkube-script-lib\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.034943 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-run-openvswitch\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.040962 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-env-overrides\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.040988 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-slash\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.040994 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-var-lib-openvswitch\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.040372 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-ovnkube-script-lib\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.040992 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-kubelet\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.041036 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-etc-openvswitch\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.041063 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-run-netns\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc 
kubenswrapper[4662]: I1208 09:24:10.041079 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.041161 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-run-ovn-kubernetes\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.041256 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-log-socket\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.041417 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-node-log\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.041433 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-run-systemd\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.041434 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-cni-bin\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.041608 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-host-cni-netd\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.041648 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-systemd-units\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.042688 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-ovnkube-config\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.048260 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-ovn-node-metrics-cert\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.062669 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wpm4\" (UniqueName: \"kubernetes.io/projected/d5914b7f-4739-4406-9bcd-e9642e6c1b9a-kube-api-access-9wpm4\") pod \"ovnkube-node-949qw\" (UID: \"d5914b7f-4739-4406-9bcd-e9642e6c1b9a\") " pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.151065 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:10 crc kubenswrapper[4662]: W1208 09:24:10.173061 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5914b7f_4739_4406_9bcd_e9642e6c1b9a.slice/crio-f05cd58a143dd391f471dab101ae5ada6086f0e9669aff342b0fd59f4f925b12 WatchSource:0}: Error finding container f05cd58a143dd391f471dab101ae5ada6086f0e9669aff342b0fd59f4f925b12: Status 404 returned error can't find the container with id f05cd58a143dd391f471dab101ae5ada6086f0e9669aff342b0fd59f4f925b12 Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.403248 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovn-acl-logging/0.log" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.403899 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-fhz87_8d221fdb-50ee-4a2a-9db5-30e79f604466/ovn-controller/0.log" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.404201 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc" exitCode=0 Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.404224 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa" exitCode=0 Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.404233 4662 generic.go:334] "Generic (PLEG): container finished" podID="8d221fdb-50ee-4a2a-9db5-30e79f604466" containerID="0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9" exitCode=0 Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.404230 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc"} Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.404262 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa"} Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.404266 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.404284 4662 scope.go:117] "RemoveContainer" containerID="2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.404273 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9"}
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.404381 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-fhz87" event={"ID":"8d221fdb-50ee-4a2a-9db5-30e79f604466","Type":"ContainerDied","Data":"2b12324e25d3dc32f97d7c4e653f6b9fadb7a16ccad359400cb4b43c51eef36d"}
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.405889 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-92hkj_adeadc12-d6e2-4168-a1c0-de79d16c8de9/kube-multus/2.log"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.407870 4662 generic.go:334] "Generic (PLEG): container finished" podID="d5914b7f-4739-4406-9bcd-e9642e6c1b9a" containerID="7c94597d708172ef298c61bceba586f63543ba63ead56ad7b48aac1eb5329aa7" exitCode=0
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.407908 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" event={"ID":"d5914b7f-4739-4406-9bcd-e9642e6c1b9a","Type":"ContainerDied","Data":"7c94597d708172ef298c61bceba586f63543ba63ead56ad7b48aac1eb5329aa7"}
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.407931 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" event={"ID":"d5914b7f-4739-4406-9bcd-e9642e6c1b9a","Type":"ContainerStarted","Data":"f05cd58a143dd391f471dab101ae5ada6086f0e9669aff342b0fd59f4f925b12"}
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.423676 4662 scope.go:117] "RemoveContainer" containerID="bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.450927 4662 scope.go:117] "RemoveContainer" containerID="110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.475568 4662 scope.go:117] "RemoveContainer" containerID="0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.476260 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fhz87"]
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.482285 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-fhz87"]
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.509968 4662 scope.go:117] "RemoveContainer" containerID="86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.535460 4662 scope.go:117] "RemoveContainer" containerID="d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.550365 4662 scope.go:117] "RemoveContainer" containerID="cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.565906 4662 scope.go:117] "RemoveContainer" containerID="80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.585849 4662 scope.go:117] "RemoveContainer" containerID="e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.613475 4662 scope.go:117] "RemoveContainer" containerID="2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c"
Dec 08 09:24:10 crc kubenswrapper[4662]: E1208 09:24:10.614420 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c\": container with ID starting with 2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c not found: ID does not exist" containerID="2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.614456 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c"} err="failed to get container status \"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c\": rpc error: code = NotFound desc = could not find container \"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c\": container with ID starting with 2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c not found: ID does not exist"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.614480 4662 scope.go:117] "RemoveContainer" containerID="bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc"
Dec 08 09:24:10 crc kubenswrapper[4662]: E1208 09:24:10.614726 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\": container with ID starting with bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc not found: ID does not exist" containerID="bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.614772 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc"} err="failed to get container status \"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\": rpc error: code = NotFound desc = could not find container \"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\": container with ID starting with bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc not found: ID does not exist"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.614789 4662 scope.go:117] "RemoveContainer" containerID="110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa"
Dec 08 09:24:10 crc kubenswrapper[4662]: E1208 09:24:10.615036 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\": container with ID starting with 110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa not found: ID does not exist" containerID="110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.615062 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa"} err="failed to get container status \"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\": rpc error: code = NotFound desc = could not find container \"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\": container with ID starting with 110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa not found: ID does not exist"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.615077 4662 scope.go:117] "RemoveContainer" containerID="0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9"
Dec 08 09:24:10 crc kubenswrapper[4662]: E1208 09:24:10.615289 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\": container with ID starting with 0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9 not found: ID does not exist" containerID="0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.615314 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9"} err="failed to get container status \"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\": rpc error: code = NotFound desc = could not find container \"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\": container with ID starting with 0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9 not found: ID does not exist"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.615329 4662 scope.go:117] "RemoveContainer" containerID="86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc"
Dec 08 09:24:10 crc kubenswrapper[4662]: E1208 09:24:10.615518 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\": container with ID starting with 86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc not found: ID does not exist" containerID="86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.615536 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc"} err="failed to get container status \"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\": rpc error: code = NotFound desc = could not find container \"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\": container with ID starting with 86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc not found: ID does not exist"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.615553 4662 scope.go:117] "RemoveContainer" containerID="d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088"
Dec 08 09:24:10 crc kubenswrapper[4662]: E1208 09:24:10.615735 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\": container with ID starting with d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088 not found: ID does not exist" containerID="d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.615774 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088"} err="failed to get container status \"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\": rpc error: code = NotFound desc = could not find container \"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\": container with ID starting with d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088 not found: ID does not exist"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.615793 4662 scope.go:117] "RemoveContainer" containerID="cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed"
Dec 08 09:24:10 crc kubenswrapper[4662]: E1208 09:24:10.615983 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\": container with ID starting with cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed not found: ID does not exist" containerID="cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.616001 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed"} err="failed to get container status \"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\": rpc error: code = NotFound desc = could not find container \"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\": container with ID starting with cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed not found: ID does not exist"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.616015 4662 scope.go:117] "RemoveContainer" containerID="80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03"
Dec 08 09:24:10 crc kubenswrapper[4662]: E1208 09:24:10.616177 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\": container with ID starting with 80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03 not found: ID does not exist" containerID="80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.616197 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03"} err="failed to get container status \"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\": rpc error: code = NotFound desc = could not find container \"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\": container with ID starting with 80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03 not found: ID does not exist"
Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.616208 4662 scope.go:117] "RemoveContainer" containerID="e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c"
Dec 08 09:24:10 crc kubenswrapper[4662]: E1208 09:24:10.616376 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\": container with ID starting with e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c not found: ID does not exist" containerID="e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c"
\"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\": container with ID starting with e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c not found: ID does not exist" containerID="e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.616394 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c"} err="failed to get container status \"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\": rpc error: code = NotFound desc = could not find container \"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\": container with ID starting with e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.616405 4662 scope.go:117] "RemoveContainer" containerID="2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.616562 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c"} err="failed to get container status \"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c\": rpc error: code = NotFound desc = could not find container \"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c\": container with ID starting with 2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.616580 4662 scope.go:117] "RemoveContainer" containerID="bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.616755 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc"} err="failed to get container status \"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\": rpc error: code = NotFound desc = could not find container \"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\": container with ID starting with bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.616774 4662 scope.go:117] "RemoveContainer" containerID="110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.617007 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa"} err="failed to get container status \"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\": rpc error: code = NotFound desc = could not find container \"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\": container with ID starting with 110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.617027 4662 scope.go:117] "RemoveContainer" containerID="0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.617234 4662 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9"} err="failed to get container status \"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\": rpc error: code = NotFound desc = could not find container \"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\": container with ID starting with 0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9 not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.617256 4662 scope.go:117] "RemoveContainer" containerID="86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.617475 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc"} err="failed to get container status \"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\": rpc error: code = NotFound desc = could not find container \"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\": container with ID starting with 86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.617498 4662 scope.go:117] "RemoveContainer" containerID="d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.617694 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088"} err="failed to get container status \"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\": rpc error: code = NotFound desc = could not find container \"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\": container with ID starting with d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088 not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.617729 4662 scope.go:117] "RemoveContainer" containerID="cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.618060 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed"} err="failed to get container status \"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\": rpc error: code = NotFound desc = could not find container \"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\": container with ID starting with cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.618104 4662 scope.go:117] "RemoveContainer" containerID="80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.618349 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03"} err="failed to get container status \"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\": rpc error: code = NotFound desc = could not find container \"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\": container with ID starting with 80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03 not found: ID does not exist" Dec 
08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.618378 4662 scope.go:117] "RemoveContainer" containerID="e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.618588 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c"} err="failed to get container status \"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\": rpc error: code = NotFound desc = could not find container \"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\": container with ID starting with e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.618616 4662 scope.go:117] "RemoveContainer" containerID="2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.618924 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c"} err="failed to get container status \"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c\": rpc error: code = NotFound desc = could not find container \"2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c\": container with ID starting with 2401fa45508ce5c55109c5cf63513fed7027db6af0c0fe89acc85b5b8d84cd3c not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.618943 4662 scope.go:117] "RemoveContainer" containerID="bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.619251 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc"} err="failed to get container status \"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\": rpc error: code = NotFound desc = could not find container \"bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc\": container with ID starting with bef4651a8d70b5a502a61cce1882535dbdf93a9d1b86cee049095706ba9085dc not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.619273 4662 scope.go:117] "RemoveContainer" containerID="110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.619533 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa"} err="failed to get container status \"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\": rpc error: code = NotFound desc = could not find container \"110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa\": container with ID starting with 110dcecee268b6a792a0e6be3f53eee452fcdb324701856f3fa45b6c3e2f4bfa not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.619552 4662 scope.go:117] "RemoveContainer" containerID="0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.619719 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9"} err="failed to get container status 
\"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\": rpc error: code = NotFound desc = could not find container \"0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9\": container with ID starting with 0fa3662b59f8c6f21683c9d0a26b06319d63f0f45f9e9073ba00fdbc1d6ff6f9 not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.619758 4662 scope.go:117] "RemoveContainer" containerID="86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.620194 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc"} err="failed to get container status \"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\": rpc error: code = NotFound desc = could not find container \"86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc\": container with ID starting with 86a7faefe259222134ac5e268d5b09cc5991a7a1db959a676f9bc84815965ffc not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.620219 4662 scope.go:117] "RemoveContainer" containerID="d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.621367 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088"} err="failed to get container status \"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\": rpc error: code = NotFound desc = could not find container \"d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088\": container with ID starting with d7e3593089b779a9a8c24f9b1f952d10a5c5cb0101e6898987eb70cabb060088 not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.621388 4662 scope.go:117] "RemoveContainer" containerID="cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.621700 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed"} err="failed to get container status \"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\": rpc error: code = NotFound desc = could not find container \"cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed\": container with ID starting with cd195222dce2b9d2a7d0b42edaf08f90492d2d26f1e76a65afc876d83182e0ed not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.621751 4662 scope.go:117] "RemoveContainer" containerID="80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.622091 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03"} err="failed to get container status \"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\": rpc error: code = NotFound desc = could not find container \"80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03\": container with ID starting with 80ecebdb14e77cbe0927f860702ceb7acc93c27e79ad6714af51fe58937c0e03 not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.622122 4662 scope.go:117] "RemoveContainer" 
containerID="e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.624015 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c"} err="failed to get container status \"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\": rpc error: code = NotFound desc = could not find container \"e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c\": container with ID starting with e3e5eb6e85ca3ec3ad14eace21d9c8217f3392d124395c9a0c107f496de8b30c not found: ID does not exist" Dec 08 09:24:10 crc kubenswrapper[4662]: I1208 09:24:10.704942 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d221fdb-50ee-4a2a-9db5-30e79f604466" path="/var/lib/kubelet/pods/8d221fdb-50ee-4a2a-9db5-30e79f604466/volumes" Dec 08 09:24:11 crc kubenswrapper[4662]: I1208 09:24:11.417401 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" event={"ID":"d5914b7f-4739-4406-9bcd-e9642e6c1b9a","Type":"ContainerStarted","Data":"5c2688588e101619cdc1b25487454c193ecb36ecfba8236ef68a1286c30a6af7"} Dec 08 09:24:11 crc kubenswrapper[4662]: I1208 09:24:11.417727 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" event={"ID":"d5914b7f-4739-4406-9bcd-e9642e6c1b9a","Type":"ContainerStarted","Data":"5e0ef93766c2379d9d3f99b7981f6fba49af7623ea0a47aa367a5368ed9a185a"} Dec 08 09:24:11 crc kubenswrapper[4662]: I1208 09:24:11.417773 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" event={"ID":"d5914b7f-4739-4406-9bcd-e9642e6c1b9a","Type":"ContainerStarted","Data":"0d8333c17ad931ed370a7445af9cbdf47f6a832e4ca45cb6355742db090b7ebb"} Dec 08 09:24:11 crc kubenswrapper[4662]: I1208 09:24:11.417786 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" event={"ID":"d5914b7f-4739-4406-9bcd-e9642e6c1b9a","Type":"ContainerStarted","Data":"aea05835248d88e1296e85602a8787f173e9440b98bf3ddb745477ba47103f71"} Dec 08 09:24:11 crc kubenswrapper[4662]: I1208 09:24:11.417799 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" event={"ID":"d5914b7f-4739-4406-9bcd-e9642e6c1b9a","Type":"ContainerStarted","Data":"8b9fb70583917f980dd9713157750ce57f4cd99e199ec722c0ebd31f237e153a"} Dec 08 09:24:11 crc kubenswrapper[4662]: I1208 09:24:11.417809 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" event={"ID":"d5914b7f-4739-4406-9bcd-e9642e6c1b9a","Type":"ContainerStarted","Data":"2bfa2703dc825f33de1c3fc7602e23f8ce92f528224db550a814a78a3038bcf6"} Dec 08 09:24:13 crc kubenswrapper[4662]: I1208 09:24:13.435385 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" event={"ID":"d5914b7f-4739-4406-9bcd-e9642e6c1b9a","Type":"ContainerStarted","Data":"191021a0b840f73b69f99857222b1ff4119a82543e5357325b2ffc88a63d2a2e"} Dec 08 09:24:16 crc kubenswrapper[4662]: I1208 09:24:16.452840 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" event={"ID":"d5914b7f-4739-4406-9bcd-e9642e6c1b9a","Type":"ContainerStarted","Data":"f3b35dd75024cf60e362249d94b8de00e34ad4839cd9ebf3c4e123468ff031ad"} Dec 08 09:24:16 crc kubenswrapper[4662]: I1208 09:24:16.453353 4662 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:16 crc kubenswrapper[4662]: I1208 09:24:16.453370 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:16 crc kubenswrapper[4662]: I1208 09:24:16.480710 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:16 crc kubenswrapper[4662]: I1208 09:24:16.486048 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" podStartSLOduration=7.4860313099999996 podStartE2EDuration="7.48603131s" podCreationTimestamp="2025-12-08 09:24:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:24:16.484715945 +0000 UTC m=+580.053743935" watchObservedRunningTime="2025-12-08 09:24:16.48603131 +0000 UTC m=+580.055059300" Dec 08 09:24:17 crc kubenswrapper[4662]: I1208 09:24:17.458427 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:17 crc kubenswrapper[4662]: I1208 09:24:17.494954 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:21 crc kubenswrapper[4662]: I1208 09:24:21.697824 4662 scope.go:117] "RemoveContainer" containerID="ee2d7f73f6fa58baaed744f52045b57227c634e502ca8143f4dd2f7011117cbb" Dec 08 09:24:21 crc kubenswrapper[4662]: E1208 09:24:21.698311 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-92hkj_openshift-multus(adeadc12-d6e2-4168-a1c0-de79d16c8de9)\"" pod="openshift-multus/multus-92hkj" podUID="adeadc12-d6e2-4168-a1c0-de79d16c8de9" Dec 08 09:24:32 crc kubenswrapper[4662]: I1208 09:24:32.611367 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:24:32 crc kubenswrapper[4662]: I1208 09:24:32.613955 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:24:32 crc kubenswrapper[4662]: I1208 09:24:32.614041 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:24:32 crc kubenswrapper[4662]: I1208 09:24:32.614860 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"57a0d204dde3b59ba48b93d9451be527618c216af11a616391fcdba29c29b462"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 09:24:32 crc kubenswrapper[4662]: I1208 09:24:32.614991 4662 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" containerID="cri-o://57a0d204dde3b59ba48b93d9451be527618c216af11a616391fcdba29c29b462" gracePeriod=600 Dec 08 09:24:33 crc kubenswrapper[4662]: I1208 09:24:33.554946 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerID="57a0d204dde3b59ba48b93d9451be527618c216af11a616391fcdba29c29b462" exitCode=0 Dec 08 09:24:33 crc kubenswrapper[4662]: I1208 09:24:33.554987 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"57a0d204dde3b59ba48b93d9451be527618c216af11a616391fcdba29c29b462"} Dec 08 09:24:33 crc kubenswrapper[4662]: I1208 09:24:33.555317 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"d950f79d0061a93dd2f9e3d3caab4b8f10f8ead0de736eba822a73ae528aea9e"} Dec 08 09:24:33 crc kubenswrapper[4662]: I1208 09:24:33.555344 4662 scope.go:117] "RemoveContainer" containerID="e90e87142063402540b30d02a0f13b49b808a8a4a75b80f889930edd7b43a54f" Dec 08 09:24:33 crc kubenswrapper[4662]: I1208 09:24:33.698433 4662 scope.go:117] "RemoveContainer" containerID="ee2d7f73f6fa58baaed744f52045b57227c634e502ca8143f4dd2f7011117cbb" Dec 08 09:24:34 crc kubenswrapper[4662]: I1208 09:24:34.562868 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-92hkj_adeadc12-d6e2-4168-a1c0-de79d16c8de9/kube-multus/2.log" Dec 08 09:24:34 crc kubenswrapper[4662]: I1208 09:24:34.563335 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-92hkj" event={"ID":"adeadc12-d6e2-4168-a1c0-de79d16c8de9","Type":"ContainerStarted","Data":"11a52c7e2acd2cb5bedf0bf602b201d8ed0b90ef8c5aa623fd072675df7dee2b"} Dec 08 09:24:40 crc kubenswrapper[4662]: I1208 09:24:40.172947 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-949qw" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.010469 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf"] Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.012101 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.014123 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.020541 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf"] Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.188235 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b262k\" (UniqueName: \"kubernetes.io/projected/bb54475d-40fd-4b72-9936-3b9de9625c8e-kube-api-access-b262k\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.188301 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.188522 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.289527 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.289928 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.290079 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b262k\" (UniqueName: \"kubernetes.io/projected/bb54475d-40fd-4b72-9936-3b9de9625c8e-kube-api-access-b262k\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.290512 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-util\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.290550 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-bundle\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.313088 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b262k\" (UniqueName: \"kubernetes.io/projected/bb54475d-40fd-4b72-9936-3b9de9625c8e-kube-api-access-b262k\") pod \"5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.327037 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:49 crc kubenswrapper[4662]: I1208 09:24:49.801618 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf"] Dec 08 09:24:49 crc kubenswrapper[4662]: W1208 09:24:49.812410 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb54475d_40fd_4b72_9936_3b9de9625c8e.slice/crio-d84502beb7fae561796f6ff4ba2610fdbb07a5f514e8878dbfb69e52b2ac535e WatchSource:0}: Error finding container d84502beb7fae561796f6ff4ba2610fdbb07a5f514e8878dbfb69e52b2ac535e: Status 404 returned error can't find the container with id d84502beb7fae561796f6ff4ba2610fdbb07a5f514e8878dbfb69e52b2ac535e Dec 08 09:24:50 crc kubenswrapper[4662]: I1208 09:24:50.659403 4662 generic.go:334] "Generic (PLEG): container finished" podID="bb54475d-40fd-4b72-9936-3b9de9625c8e" containerID="18517fb3b9dafe5ad9aef24174e486c14e984fd62260475228dc538834b75839" exitCode=0 Dec 08 09:24:50 crc kubenswrapper[4662]: I1208 09:24:50.659518 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" event={"ID":"bb54475d-40fd-4b72-9936-3b9de9625c8e","Type":"ContainerDied","Data":"18517fb3b9dafe5ad9aef24174e486c14e984fd62260475228dc538834b75839"} Dec 08 09:24:50 crc kubenswrapper[4662]: I1208 09:24:50.659706 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" event={"ID":"bb54475d-40fd-4b72-9936-3b9de9625c8e","Type":"ContainerStarted","Data":"d84502beb7fae561796f6ff4ba2610fdbb07a5f514e8878dbfb69e52b2ac535e"} Dec 08 09:24:52 crc kubenswrapper[4662]: I1208 09:24:52.672124 4662 generic.go:334] "Generic (PLEG): container finished" podID="bb54475d-40fd-4b72-9936-3b9de9625c8e" containerID="6605118b096941f40f66bced40a54f62bb8af65a92cef59bb8746c1aedac7989" exitCode=0 Dec 08 09:24:52 crc kubenswrapper[4662]: I1208 09:24:52.672203 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" event={"ID":"bb54475d-40fd-4b72-9936-3b9de9625c8e","Type":"ContainerDied","Data":"6605118b096941f40f66bced40a54f62bb8af65a92cef59bb8746c1aedac7989"} Dec 08 09:24:53 crc kubenswrapper[4662]: I1208 09:24:53.680877 4662 generic.go:334] "Generic (PLEG): container finished" podID="bb54475d-40fd-4b72-9936-3b9de9625c8e" containerID="e4cc5fa686cd5dbf32229217e548dd9b9c1df4abc1ddf6320cbef1778a4e264b" exitCode=0 Dec 08 09:24:53 crc kubenswrapper[4662]: I1208 09:24:53.680929 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" event={"ID":"bb54475d-40fd-4b72-9936-3b9de9625c8e","Type":"ContainerDied","Data":"e4cc5fa686cd5dbf32229217e548dd9b9c1df4abc1ddf6320cbef1778a4e264b"} Dec 08 09:24:54 crc kubenswrapper[4662]: I1208 09:24:54.910779 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.061886 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-util\") pod \"bb54475d-40fd-4b72-9936-3b9de9625c8e\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.061959 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-bundle\") pod \"bb54475d-40fd-4b72-9936-3b9de9625c8e\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.062009 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b262k\" (UniqueName: \"kubernetes.io/projected/bb54475d-40fd-4b72-9936-3b9de9625c8e-kube-api-access-b262k\") pod \"bb54475d-40fd-4b72-9936-3b9de9625c8e\" (UID: \"bb54475d-40fd-4b72-9936-3b9de9625c8e\") " Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.062628 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-bundle" (OuterVolumeSpecName: "bundle") pod "bb54475d-40fd-4b72-9936-3b9de9625c8e" (UID: "bb54475d-40fd-4b72-9936-3b9de9625c8e"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.067681 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb54475d-40fd-4b72-9936-3b9de9625c8e-kube-api-access-b262k" (OuterVolumeSpecName: "kube-api-access-b262k") pod "bb54475d-40fd-4b72-9936-3b9de9625c8e" (UID: "bb54475d-40fd-4b72-9936-3b9de9625c8e"). InnerVolumeSpecName "kube-api-access-b262k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.085690 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-util" (OuterVolumeSpecName: "util") pod "bb54475d-40fd-4b72-9936-3b9de9625c8e" (UID: "bb54475d-40fd-4b72-9936-3b9de9625c8e"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.163318 4662 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.163364 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b262k\" (UniqueName: \"kubernetes.io/projected/bb54475d-40fd-4b72-9936-3b9de9625c8e-kube-api-access-b262k\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.163390 4662 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bb54475d-40fd-4b72-9936-3b9de9625c8e-util\") on node \"crc\" DevicePath \"\"" Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.702858 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" event={"ID":"bb54475d-40fd-4b72-9936-3b9de9625c8e","Type":"ContainerDied","Data":"d84502beb7fae561796f6ff4ba2610fdbb07a5f514e8878dbfb69e52b2ac535e"} Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.702928 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84502beb7fae561796f6ff4ba2610fdbb07a5f514e8878dbfb69e52b2ac535e" Dec 08 09:24:55 crc kubenswrapper[4662]: I1208 09:24:55.702952 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf" Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.769044 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-p4d52"] Dec 08 09:25:00 crc kubenswrapper[4662]: E1208 09:25:00.769540 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb54475d-40fd-4b72-9936-3b9de9625c8e" containerName="pull" Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.769555 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb54475d-40fd-4b72-9936-3b9de9625c8e" containerName="pull" Dec 08 09:25:00 crc kubenswrapper[4662]: E1208 09:25:00.769571 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb54475d-40fd-4b72-9936-3b9de9625c8e" containerName="util" Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.769576 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb54475d-40fd-4b72-9936-3b9de9625c8e" containerName="util" Dec 08 09:25:00 crc kubenswrapper[4662]: E1208 09:25:00.769584 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb54475d-40fd-4b72-9936-3b9de9625c8e" containerName="extract" Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.769589 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb54475d-40fd-4b72-9936-3b9de9625c8e" containerName="extract" Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.769679 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb54475d-40fd-4b72-9936-3b9de9625c8e" containerName="extract" Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.770051 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p4d52" Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.774714 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.775068 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-b6kkn" Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.778220 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.783417 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-p4d52"] Dec 08 09:25:00 crc kubenswrapper[4662]: I1208 09:25:00.934450 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwpqj\" (UniqueName: \"kubernetes.io/projected/64cd0b32-387f-4149-936a-43b7dac53247-kube-api-access-hwpqj\") pod \"nmstate-operator-5b5b58f5c8-p4d52\" (UID: \"64cd0b32-387f-4149-936a-43b7dac53247\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p4d52" Dec 08 09:25:01 crc kubenswrapper[4662]: I1208 09:25:01.036326 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwpqj\" (UniqueName: \"kubernetes.io/projected/64cd0b32-387f-4149-936a-43b7dac53247-kube-api-access-hwpqj\") pod \"nmstate-operator-5b5b58f5c8-p4d52\" (UID: \"64cd0b32-387f-4149-936a-43b7dac53247\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p4d52" Dec 08 09:25:01 crc kubenswrapper[4662]: I1208 09:25:01.069575 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwpqj\" (UniqueName: \"kubernetes.io/projected/64cd0b32-387f-4149-936a-43b7dac53247-kube-api-access-hwpqj\") pod \"nmstate-operator-5b5b58f5c8-p4d52\" (UID: \"64cd0b32-387f-4149-936a-43b7dac53247\") " pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p4d52" Dec 08 09:25:01 crc kubenswrapper[4662]: I1208 09:25:01.090734 4662 util.go:30] "No sandbox for pod can be found. 
Dec 08 09:25:01 crc kubenswrapper[4662]: I1208 09:25:01.322943 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5b5b58f5c8-p4d52"]
Dec 08 09:25:01 crc kubenswrapper[4662]: I1208 09:25:01.734916 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p4d52" event={"ID":"64cd0b32-387f-4149-936a-43b7dac53247","Type":"ContainerStarted","Data":"679335eda92289ae5688ec19cf183525e8794a3e2a3207bc37bb39a86abcd65f"}
Dec 08 09:25:03 crc kubenswrapper[4662]: I1208 09:25:03.745460 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p4d52" event={"ID":"64cd0b32-387f-4149-936a-43b7dac53247","Type":"ContainerStarted","Data":"abac99a124a793768ffdd2f30c7d43ed8a29c34a75c46859e36e05a290d94212"}
Dec 08 09:25:03 crc kubenswrapper[4662]: I1208 09:25:03.761815 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5b5b58f5c8-p4d52" podStartSLOduration=2.143263638 podStartE2EDuration="3.761794523s" podCreationTimestamp="2025-12-08 09:25:00 +0000 UTC" firstStartedPulling="2025-12-08 09:25:01.335173151 +0000 UTC m=+624.904201141" lastFinishedPulling="2025-12-08 09:25:02.953703996 +0000 UTC m=+626.522732026" observedRunningTime="2025-12-08 09:25:03.759383949 +0000 UTC m=+627.328411999" watchObservedRunningTime="2025-12-08 09:25:03.761794523 +0000 UTC m=+627.330822513"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.471904 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b"]
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.473196 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.475477 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-mw4ft"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.482949 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7"]
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.484021 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.486867 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.496390 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b"]
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.520952 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-9gzt4"]
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.521759 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.540511 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7"]
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.545680 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-ovs-socket\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.545755 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsl26\" (UniqueName: \"kubernetes.io/projected/1c33596c-571d-4ed0-ab96-408c6246dde3-kube-api-access-tsl26\") pod \"nmstate-metrics-7f946cbc9-v2z8b\" (UID: \"1c33596c-571d-4ed0-ab96-408c6246dde3\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.545783 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-nmstate-lock\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.545808 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvgk5\" (UniqueName: \"kubernetes.io/projected/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-kube-api-access-xvgk5\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.545840 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snq5p\" (UniqueName: \"kubernetes.io/projected/1adc5600-df22-4b2d-b6cf-2e044117c530-kube-api-access-snq5p\") pod \"nmstate-webhook-5f6d4c5ccb-d4mb7\" (UID: \"1adc5600-df22-4b2d-b6cf-2e044117c530\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.545866 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-dbus-socket\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.545905 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1adc5600-df22-4b2d-b6cf-2e044117c530-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-d4mb7\" (UID: \"1adc5600-df22-4b2d-b6cf-2e044117c530\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.646712 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-ovs-socket\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.647088 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-nmstate-lock\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.647114 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsl26\" (UniqueName: \"kubernetes.io/projected/1c33596c-571d-4ed0-ab96-408c6246dde3-kube-api-access-tsl26\") pod \"nmstate-metrics-7f946cbc9-v2z8b\" (UID: \"1c33596c-571d-4ed0-ab96-408c6246dde3\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.647141 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvgk5\" (UniqueName: \"kubernetes.io/projected/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-kube-api-access-xvgk5\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.647172 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snq5p\" (UniqueName: \"kubernetes.io/projected/1adc5600-df22-4b2d-b6cf-2e044117c530-kube-api-access-snq5p\") pod \"nmstate-webhook-5f6d4c5ccb-d4mb7\" (UID: \"1adc5600-df22-4b2d-b6cf-2e044117c530\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.647198 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-dbus-socket\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.647237 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1adc5600-df22-4b2d-b6cf-2e044117c530-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-d4mb7\" (UID: \"1adc5600-df22-4b2d-b6cf-2e044117c530\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7"
Dec 08 09:25:09 crc kubenswrapper[4662]: E1208 09:25:09.647347 4662 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Dec 08 09:25:09 crc kubenswrapper[4662]: E1208 09:25:09.647398 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1adc5600-df22-4b2d-b6cf-2e044117c530-tls-key-pair podName:1adc5600-df22-4b2d-b6cf-2e044117c530 nodeName:}" failed. No retries permitted until 2025-12-08 09:25:10.147379944 +0000 UTC m=+633.716407934 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/1adc5600-df22-4b2d-b6cf-2e044117c530-tls-key-pair") pod "nmstate-webhook-5f6d4c5ccb-d4mb7" (UID: "1adc5600-df22-4b2d-b6cf-2e044117c530") : secret "openshift-nmstate-webhook" not found
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.646870 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-ovs-socket\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.647445 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-nmstate-lock\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.648165 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-dbus-socket\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.652286 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"]
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.653142 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.672282 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.672554 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.672762 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-g67pj"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.699940 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"]
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.704946 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsl26\" (UniqueName: \"kubernetes.io/projected/1c33596c-571d-4ed0-ab96-408c6246dde3-kube-api-access-tsl26\") pod \"nmstate-metrics-7f946cbc9-v2z8b\" (UID: \"1c33596c-571d-4ed0-ab96-408c6246dde3\") " pod="openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.715280 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snq5p\" (UniqueName: \"kubernetes.io/projected/1adc5600-df22-4b2d-b6cf-2e044117c530-kube-api-access-snq5p\") pod \"nmstate-webhook-5f6d4c5ccb-d4mb7\" (UID: \"1adc5600-df22-4b2d-b6cf-2e044117c530\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.717693 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvgk5\" (UniqueName: \"kubernetes.io/projected/5ba14e04-5fa6-4c63-b8a1-4138df25d0ce-kube-api-access-xvgk5\") pod \"nmstate-handler-9gzt4\" (UID: \"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce\") " pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.752183 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-wk2lb\" (UID: \"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.752248 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-wk2lb\" (UID: \"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.752299 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w7dj\" (UniqueName: \"kubernetes.io/projected/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-kube-api-access-9w7dj\") pod \"nmstate-console-plugin-7fbb5f6569-wk2lb\" (UID: \"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.812199 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.857138 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-9gzt4"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.857553 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-wk2lb\" (UID: \"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.857613 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w7dj\" (UniqueName: \"kubernetes.io/projected/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-kube-api-access-9w7dj\") pod \"nmstate-console-plugin-7fbb5f6569-wk2lb\" (UID: \"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.857690 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-wk2lb\" (UID: \"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"
Dec 08 09:25:09 crc kubenswrapper[4662]: E1208 09:25:09.857806 4662 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Dec 08 09:25:09 crc kubenswrapper[4662]: E1208 09:25:09.857854 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-plugin-serving-cert podName:a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9 nodeName:}" failed. No retries permitted until 2025-12-08 09:25:10.357839475 +0000 UTC m=+633.926867465 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-plugin-serving-cert") pod "nmstate-console-plugin-7fbb5f6569-wk2lb" (UID: "a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9") : secret "plugin-serving-cert" not found
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.858690 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-nginx-conf\") pod \"nmstate-console-plugin-7fbb5f6569-wk2lb\" (UID: \"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.882114 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w7dj\" (UniqueName: \"kubernetes.io/projected/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-kube-api-access-9w7dj\") pod \"nmstate-console-plugin-7fbb5f6569-wk2lb\" (UID: \"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.896714 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-d997d46c7-6l8x5"]
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.897581 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d997d46c7-6l8x5"
Dec 08 09:25:09 crc kubenswrapper[4662]: I1208 09:25:09.922868 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d997d46c7-6l8x5"]
Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.062457 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6cc4545-31b5-41b2-b8fc-89f415005795-console-serving-cert\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5"
Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.062524 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-service-ca\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5"
Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.062552 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-trusted-ca-bundle\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5"
Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.062577 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnbjx\" (UniqueName: \"kubernetes.io/projected/b6cc4545-31b5-41b2-b8fc-89f415005795-kube-api-access-lnbjx\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5"
Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.062618 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-console-config\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.062667 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6cc4545-31b5-41b2-b8fc-89f415005795-console-oauth-config\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.062686 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-oauth-serving-cert\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.157958 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b"] Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.163924 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-console-config\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.163971 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1adc5600-df22-4b2d-b6cf-2e044117c530-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-d4mb7\" (UID: \"1adc5600-df22-4b2d-b6cf-2e044117c530\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.163989 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6cc4545-31b5-41b2-b8fc-89f415005795-console-oauth-config\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.164007 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-oauth-serving-cert\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.164041 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6cc4545-31b5-41b2-b8fc-89f415005795-console-serving-cert\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.164067 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-service-ca\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") 
" pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.164090 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-trusted-ca-bundle\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.164113 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnbjx\" (UniqueName: \"kubernetes.io/projected/b6cc4545-31b5-41b2-b8fc-89f415005795-kube-api-access-lnbjx\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.165021 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-console-config\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.165141 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-service-ca\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.165548 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-trusted-ca-bundle\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.165584 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b6cc4545-31b5-41b2-b8fc-89f415005795-oauth-serving-cert\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.169002 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/1adc5600-df22-4b2d-b6cf-2e044117c530-tls-key-pair\") pod \"nmstate-webhook-5f6d4c5ccb-d4mb7\" (UID: \"1adc5600-df22-4b2d-b6cf-2e044117c530\") " pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.170609 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b6cc4545-31b5-41b2-b8fc-89f415005795-console-oauth-config\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.176714 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6cc4545-31b5-41b2-b8fc-89f415005795-console-serving-cert\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc 
kubenswrapper[4662]: I1208 09:25:10.180977 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnbjx\" (UniqueName: \"kubernetes.io/projected/b6cc4545-31b5-41b2-b8fc-89f415005795-kube-api-access-lnbjx\") pod \"console-d997d46c7-6l8x5\" (UID: \"b6cc4545-31b5-41b2-b8fc-89f415005795\") " pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.223576 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.366648 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-wk2lb\" (UID: \"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.371378 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9-plugin-serving-cert\") pod \"nmstate-console-plugin-7fbb5f6569-wk2lb\" (UID: \"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9\") " pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.399920 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-d997d46c7-6l8x5"] Dec 08 09:25:10 crc kubenswrapper[4662]: W1208 09:25:10.405566 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6cc4545_31b5_41b2_b8fc_89f415005795.slice/crio-f9accba399bc717e09259334fb936d0c26bfcef60acd55928068cef685f66694 WatchSource:0}: Error finding container f9accba399bc717e09259334fb936d0c26bfcef60acd55928068cef685f66694: Status 404 returned error can't find the container with id f9accba399bc717e09259334fb936d0c26bfcef60acd55928068cef685f66694 Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.427614 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.589891 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7"] Dec 08 09:25:10 crc kubenswrapper[4662]: W1208 09:25:10.600207 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1adc5600_df22_4b2d_b6cf_2e044117c530.slice/crio-0626f1363ec84cfce9bda894b4783a87ad2596235b30368854b0c1b16bd5f3cc WatchSource:0}: Error finding container 0626f1363ec84cfce9bda894b4783a87ad2596235b30368854b0c1b16bd5f3cc: Status 404 returned error can't find the container with id 0626f1363ec84cfce9bda894b4783a87ad2596235b30368854b0c1b16bd5f3cc Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.649828 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.794337 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9gzt4" event={"ID":"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce","Type":"ContainerStarted","Data":"c6d4c57581ccec5b9d6afb770a883920db583fb1d820a172626b586dfdcdc1b3"} Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.797300 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d997d46c7-6l8x5" event={"ID":"b6cc4545-31b5-41b2-b8fc-89f415005795","Type":"ContainerStarted","Data":"8cd9894d5591585b189fcdccc176772804e2336c3b96d84bf3e64d58fbd26392"} Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.797353 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-d997d46c7-6l8x5" event={"ID":"b6cc4545-31b5-41b2-b8fc-89f415005795","Type":"ContainerStarted","Data":"f9accba399bc717e09259334fb936d0c26bfcef60acd55928068cef685f66694"} Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.799323 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7" event={"ID":"1adc5600-df22-4b2d-b6cf-2e044117c530","Type":"ContainerStarted","Data":"0626f1363ec84cfce9bda894b4783a87ad2596235b30368854b0c1b16bd5f3cc"} Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.804416 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b" event={"ID":"1c33596c-571d-4ed0-ab96-408c6246dde3","Type":"ContainerStarted","Data":"18f46ee89e3705e71d5ef827fe0c2ceda9fba9dfd9ec1ad78ac2ae30bb18769b"} Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.812640 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-d997d46c7-6l8x5" podStartSLOduration=1.8126226060000001 podStartE2EDuration="1.812622606s" podCreationTimestamp="2025-12-08 09:25:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:25:10.81165877 +0000 UTC m=+634.380686780" watchObservedRunningTime="2025-12-08 09:25:10.812622606 +0000 UTC m=+634.381650596" Dec 08 09:25:10 crc kubenswrapper[4662]: I1208 09:25:10.853432 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb"] Dec 08 09:25:11 crc kubenswrapper[4662]: I1208 09:25:11.811692 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb" event={"ID":"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9","Type":"ContainerStarted","Data":"b893bc6907f5211244b0f138f727034e7f27669984e0ffad4b78952e29b8eddd"} Dec 08 09:25:12 crc kubenswrapper[4662]: I1208 09:25:12.819705 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-9gzt4" event={"ID":"5ba14e04-5fa6-4c63-b8a1-4138df25d0ce","Type":"ContainerStarted","Data":"7697295e36de751d5b8913c5fc432bb0f2b6245f520d53f2b67063e33ce34385"} Dec 08 09:25:12 crc kubenswrapper[4662]: I1208 09:25:12.820358 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-9gzt4" Dec 08 09:25:12 crc kubenswrapper[4662]: I1208 09:25:12.822185 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7" 
event={"ID":"1adc5600-df22-4b2d-b6cf-2e044117c530","Type":"ContainerStarted","Data":"bc7690e7ab650728d8725e94ea6c4f86b2b43295790303ab7d4219b93a327d0a"} Dec 08 09:25:12 crc kubenswrapper[4662]: I1208 09:25:12.822536 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7" Dec 08 09:25:12 crc kubenswrapper[4662]: I1208 09:25:12.825988 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b" event={"ID":"1c33596c-571d-4ed0-ab96-408c6246dde3","Type":"ContainerStarted","Data":"a8dc08800197fb1c4cb6bb2aa867ebe380b0be3c36c5c857333a0982fee7005b"} Dec 08 09:25:12 crc kubenswrapper[4662]: I1208 09:25:12.841592 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-9gzt4" podStartSLOduration=1.254862862 podStartE2EDuration="3.841572796s" podCreationTimestamp="2025-12-08 09:25:09 +0000 UTC" firstStartedPulling="2025-12-08 09:25:09.927794027 +0000 UTC m=+633.496822027" lastFinishedPulling="2025-12-08 09:25:12.514503951 +0000 UTC m=+636.083531961" observedRunningTime="2025-12-08 09:25:12.839052379 +0000 UTC m=+636.408080369" watchObservedRunningTime="2025-12-08 09:25:12.841572796 +0000 UTC m=+636.410600786" Dec 08 09:25:12 crc kubenswrapper[4662]: I1208 09:25:12.855865 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7" podStartSLOduration=1.91758134 podStartE2EDuration="3.855846286s" podCreationTimestamp="2025-12-08 09:25:09 +0000 UTC" firstStartedPulling="2025-12-08 09:25:10.602072562 +0000 UTC m=+634.171100552" lastFinishedPulling="2025-12-08 09:25:12.540337498 +0000 UTC m=+636.109365498" observedRunningTime="2025-12-08 09:25:12.855111086 +0000 UTC m=+636.424139066" watchObservedRunningTime="2025-12-08 09:25:12.855846286 +0000 UTC m=+636.424874276" Dec 08 09:25:13 crc kubenswrapper[4662]: I1208 09:25:13.834286 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb" event={"ID":"a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9","Type":"ContainerStarted","Data":"6a05e9cfa99747808b1a162471fd46851cf4e7eef144c375fed449da0d4a24d6"} Dec 08 09:25:13 crc kubenswrapper[4662]: I1208 09:25:13.850060 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7fbb5f6569-wk2lb" podStartSLOduration=2.032339144 podStartE2EDuration="4.850043215s" podCreationTimestamp="2025-12-08 09:25:09 +0000 UTC" firstStartedPulling="2025-12-08 09:25:10.868906494 +0000 UTC m=+634.437934484" lastFinishedPulling="2025-12-08 09:25:13.686610565 +0000 UTC m=+637.255638555" observedRunningTime="2025-12-08 09:25:13.849908791 +0000 UTC m=+637.418936781" watchObservedRunningTime="2025-12-08 09:25:13.850043215 +0000 UTC m=+637.419071195" Dec 08 09:25:15 crc kubenswrapper[4662]: I1208 09:25:15.846237 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b" event={"ID":"1c33596c-571d-4ed0-ab96-408c6246dde3","Type":"ContainerStarted","Data":"3dc08c1c3e734411bc1e4407d98724157d564bc5aeff6aa8a65fe066d4c9d53c"} Dec 08 09:25:15 crc kubenswrapper[4662]: I1208 09:25:15.873239 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-7f946cbc9-v2z8b" podStartSLOduration=2.176168422 podStartE2EDuration="6.873219581s" podCreationTimestamp="2025-12-08 09:25:09 +0000 UTC" 
firstStartedPulling="2025-12-08 09:25:10.18304703 +0000 UTC m=+633.752075020" lastFinishedPulling="2025-12-08 09:25:14.880098189 +0000 UTC m=+638.449126179" observedRunningTime="2025-12-08 09:25:15.870346704 +0000 UTC m=+639.439374714" watchObservedRunningTime="2025-12-08 09:25:15.873219581 +0000 UTC m=+639.442247581" Dec 08 09:25:19 crc kubenswrapper[4662]: I1208 09:25:19.882789 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-9gzt4" Dec 08 09:25:20 crc kubenswrapper[4662]: I1208 09:25:20.224232 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:20 crc kubenswrapper[4662]: I1208 09:25:20.224287 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:20 crc kubenswrapper[4662]: I1208 09:25:20.231120 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:20 crc kubenswrapper[4662]: I1208 09:25:20.888705 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-d997d46c7-6l8x5" Dec 08 09:25:20 crc kubenswrapper[4662]: I1208 09:25:20.966456 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9lp67"] Dec 08 09:25:30 crc kubenswrapper[4662]: I1208 09:25:30.433830 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f6d4c5ccb-d4mb7" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.037962 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc"] Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.039684 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.041562 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.048447 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc"] Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.193789 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.193839 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sfrf\" (UniqueName: \"kubernetes.io/projected/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-kube-api-access-2sfrf\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.193884 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.295610 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.295819 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.295865 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sfrf\" (UniqueName: \"kubernetes.io/projected/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-kube-api-access-2sfrf\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.296791 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-util\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.297184 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-bundle\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.318805 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sfrf\" (UniqueName: \"kubernetes.io/projected/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-kube-api-access-2sfrf\") pod \"af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") " pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.417934 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" Dec 08 09:25:43 crc kubenswrapper[4662]: I1208 09:25:43.787988 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc"] Dec 08 09:25:44 crc kubenswrapper[4662]: I1208 09:25:44.035673 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" event={"ID":"1de6e66b-f63f-4f47-b38d-52a2ff32ce38","Type":"ContainerStarted","Data":"5e63281c28470072fb56f95eb6aab8889c733783e05d819cf942aba409f26739"} Dec 08 09:25:44 crc kubenswrapper[4662]: I1208 09:25:44.036482 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" event={"ID":"1de6e66b-f63f-4f47-b38d-52a2ff32ce38","Type":"ContainerStarted","Data":"27d7c3442fa277a4e4d6320f52639e230e0b291719096927a3e2bdb887baa1fc"} Dec 08 09:25:45 crc kubenswrapper[4662]: I1208 09:25:45.044552 4662 generic.go:334] "Generic (PLEG): container finished" podID="1de6e66b-f63f-4f47-b38d-52a2ff32ce38" containerID="5e63281c28470072fb56f95eb6aab8889c733783e05d819cf942aba409f26739" exitCode=0 Dec 08 09:25:45 crc kubenswrapper[4662]: I1208 09:25:45.044624 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" event={"ID":"1de6e66b-f63f-4f47-b38d-52a2ff32ce38","Type":"ContainerDied","Data":"5e63281c28470072fb56f95eb6aab8889c733783e05d819cf942aba409f26739"} Dec 08 09:25:46 crc kubenswrapper[4662]: I1208 09:25:46.041619 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-9lp67" podUID="a0cf72db-464e-4859-bec6-0e3d456e10aa" containerName="console" containerID="cri-o://2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66" gracePeriod=15 Dec 08 09:25:46 crc kubenswrapper[4662]: I1208 09:25:46.883440 4662 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-f9d7485db-9lp67_a0cf72db-464e-4859-bec6-0e3d456e10aa/console/0.log" Dec 08 09:25:46 crc kubenswrapper[4662]: I1208 09:25:46.883783 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.043429 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-service-ca\") pod \"a0cf72db-464e-4859-bec6-0e3d456e10aa\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.043531 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-trusted-ca-bundle\") pod \"a0cf72db-464e-4859-bec6-0e3d456e10aa\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.043575 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-oauth-serving-cert\") pod \"a0cf72db-464e-4859-bec6-0e3d456e10aa\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.043616 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-oauth-config\") pod \"a0cf72db-464e-4859-bec6-0e3d456e10aa\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.043666 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-serving-cert\") pod \"a0cf72db-464e-4859-bec6-0e3d456e10aa\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.043724 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzcl5\" (UniqueName: \"kubernetes.io/projected/a0cf72db-464e-4859-bec6-0e3d456e10aa-kube-api-access-qzcl5\") pod \"a0cf72db-464e-4859-bec6-0e3d456e10aa\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.044658 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-config\") pod \"a0cf72db-464e-4859-bec6-0e3d456e10aa\" (UID: \"a0cf72db-464e-4859-bec6-0e3d456e10aa\") " Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.044433 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a0cf72db-464e-4859-bec6-0e3d456e10aa" (UID: "a0cf72db-464e-4859-bec6-0e3d456e10aa"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.044510 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a0cf72db-464e-4859-bec6-0e3d456e10aa" (UID: "a0cf72db-464e-4859-bec6-0e3d456e10aa"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.045016 4662 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.045042 4662 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.045157 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-config" (OuterVolumeSpecName: "console-config") pod "a0cf72db-464e-4859-bec6-0e3d456e10aa" (UID: "a0cf72db-464e-4859-bec6-0e3d456e10aa"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.045401 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-service-ca" (OuterVolumeSpecName: "service-ca") pod "a0cf72db-464e-4859-bec6-0e3d456e10aa" (UID: "a0cf72db-464e-4859-bec6-0e3d456e10aa"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.048555 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a0cf72db-464e-4859-bec6-0e3d456e10aa" (UID: "a0cf72db-464e-4859-bec6-0e3d456e10aa"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.053042 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a0cf72db-464e-4859-bec6-0e3d456e10aa" (UID: "a0cf72db-464e-4859-bec6-0e3d456e10aa"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.053873 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0cf72db-464e-4859-bec6-0e3d456e10aa-kube-api-access-qzcl5" (OuterVolumeSpecName: "kube-api-access-qzcl5") pod "a0cf72db-464e-4859-bec6-0e3d456e10aa" (UID: "a0cf72db-464e-4859-bec6-0e3d456e10aa"). InnerVolumeSpecName "kube-api-access-qzcl5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.057944 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-9lp67_a0cf72db-464e-4859-bec6-0e3d456e10aa/console/0.log" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.057980 4662 generic.go:334] "Generic (PLEG): container finished" podID="a0cf72db-464e-4859-bec6-0e3d456e10aa" containerID="2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66" exitCode=2 Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.058022 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lp67" event={"ID":"a0cf72db-464e-4859-bec6-0e3d456e10aa","Type":"ContainerDied","Data":"2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66"} Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.058045 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-9lp67" event={"ID":"a0cf72db-464e-4859-bec6-0e3d456e10aa","Type":"ContainerDied","Data":"e5cb5965f76e45df9c3bd0aa0f71dffe1876f2505b752e348eaa0645c831573e"} Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.058061 4662 scope.go:117] "RemoveContainer" containerID="2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.058146 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-9lp67" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.063071 4662 generic.go:334] "Generic (PLEG): container finished" podID="1de6e66b-f63f-4f47-b38d-52a2ff32ce38" containerID="3d0d2f8c911f5bbdcd5e2ab3d538bcdb8281763cc393ea76429dae7777e85fc0" exitCode=0 Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.063125 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" event={"ID":"1de6e66b-f63f-4f47-b38d-52a2ff32ce38","Type":"ContainerDied","Data":"3d0d2f8c911f5bbdcd5e2ab3d538bcdb8281763cc393ea76429dae7777e85fc0"} Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.097661 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-9lp67"] Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.100254 4662 scope.go:117] "RemoveContainer" containerID="2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66" Dec 08 09:25:47 crc kubenswrapper[4662]: E1208 09:25:47.100772 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66\": container with ID starting with 2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66 not found: ID does not exist" containerID="2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66" Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.100835 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66"} err="failed to get container status \"2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66\": rpc error: code = NotFound desc = could not find container \"2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66\": container with ID starting with 2f1ea69351f304999fd60880b02fd3126b20b3b3578c45bc985243a1c69f4e66 not found: ID does not exist" 
Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.101755 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-9lp67"]
Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.145400 4662 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.145429 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzcl5\" (UniqueName: \"kubernetes.io/projected/a0cf72db-464e-4859-bec6-0e3d456e10aa-kube-api-access-qzcl5\") on node \"crc\" DevicePath \"\""
Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.145441 4662 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-config\") on node \"crc\" DevicePath \"\""
Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.145451 4662 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a0cf72db-464e-4859-bec6-0e3d456e10aa-service-ca\") on node \"crc\" DevicePath \"\""
Dec 08 09:25:47 crc kubenswrapper[4662]: I1208 09:25:47.145464 4662 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a0cf72db-464e-4859-bec6-0e3d456e10aa-console-oauth-config\") on node \"crc\" DevicePath \"\""
Dec 08 09:25:48 crc kubenswrapper[4662]: I1208 09:25:48.072392 4662 generic.go:334] "Generic (PLEG): container finished" podID="1de6e66b-f63f-4f47-b38d-52a2ff32ce38" containerID="d3709c66236a19daadc6c97c09a097d7199546b09102dde40429ef4211235df8" exitCode=0
Dec 08 09:25:48 crc kubenswrapper[4662]: I1208 09:25:48.072514 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" event={"ID":"1de6e66b-f63f-4f47-b38d-52a2ff32ce38","Type":"ContainerDied","Data":"d3709c66236a19daadc6c97c09a097d7199546b09102dde40429ef4211235df8"}
Dec 08 09:25:48 crc kubenswrapper[4662]: I1208 09:25:48.718308 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0cf72db-464e-4859-bec6-0e3d456e10aa" path="/var/lib/kubelet/pods/a0cf72db-464e-4859-bec6-0e3d456e10aa/volumes"
Dec 08 09:25:49 crc kubenswrapper[4662]: I1208 09:25:49.295380 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc"
Dec 08 09:25:49 crc kubenswrapper[4662]: I1208 09:25:49.373179 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sfrf\" (UniqueName: \"kubernetes.io/projected/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-kube-api-access-2sfrf\") pod \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") "
Dec 08 09:25:49 crc kubenswrapper[4662]: I1208 09:25:49.373269 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-bundle\") pod \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") "
Dec 08 09:25:49 crc kubenswrapper[4662]: I1208 09:25:49.373309 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-util\") pod \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\" (UID: \"1de6e66b-f63f-4f47-b38d-52a2ff32ce38\") "
Dec 08 09:25:49 crc kubenswrapper[4662]: I1208 09:25:49.374216 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-bundle" (OuterVolumeSpecName: "bundle") pod "1de6e66b-f63f-4f47-b38d-52a2ff32ce38" (UID: "1de6e66b-f63f-4f47-b38d-52a2ff32ce38"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:25:49 crc kubenswrapper[4662]: I1208 09:25:49.378497 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-kube-api-access-2sfrf" (OuterVolumeSpecName: "kube-api-access-2sfrf") pod "1de6e66b-f63f-4f47-b38d-52a2ff32ce38" (UID: "1de6e66b-f63f-4f47-b38d-52a2ff32ce38"). InnerVolumeSpecName "kube-api-access-2sfrf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:25:49 crc kubenswrapper[4662]: I1208 09:25:49.387364 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-util" (OuterVolumeSpecName: "util") pod "1de6e66b-f63f-4f47-b38d-52a2ff32ce38" (UID: "1de6e66b-f63f-4f47-b38d-52a2ff32ce38"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:25:49 crc kubenswrapper[4662]: I1208 09:25:49.474500 4662 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 09:25:49 crc kubenswrapper[4662]: I1208 09:25:49.474550 4662 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-util\") on node \"crc\" DevicePath \"\""
Dec 08 09:25:49 crc kubenswrapper[4662]: I1208 09:25:49.474562 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sfrf\" (UniqueName: \"kubernetes.io/projected/1de6e66b-f63f-4f47-b38d-52a2ff32ce38-kube-api-access-2sfrf\") on node \"crc\" DevicePath \"\""
Dec 08 09:25:50 crc kubenswrapper[4662]: I1208 09:25:50.087255 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc" event={"ID":"1de6e66b-f63f-4f47-b38d-52a2ff32ce38","Type":"ContainerDied","Data":"27d7c3442fa277a4e4d6320f52639e230e0b291719096927a3e2bdb887baa1fc"}
Dec 08 09:25:50 crc kubenswrapper[4662]: I1208 09:25:50.087524 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27d7c3442fa277a4e4d6320f52639e230e0b291719096927a3e2bdb887baa1fc"
Dec 08 09:25:50 crc kubenswrapper[4662]: I1208 09:25:50.087811 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.075461 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"]
Dec 08 09:25:58 crc kubenswrapper[4662]: E1208 09:25:58.076211 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de6e66b-f63f-4f47-b38d-52a2ff32ce38" containerName="extract"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.076224 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de6e66b-f63f-4f47-b38d-52a2ff32ce38" containerName="extract"
Dec 08 09:25:58 crc kubenswrapper[4662]: E1208 09:25:58.076235 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de6e66b-f63f-4f47-b38d-52a2ff32ce38" containerName="pull"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.076241 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de6e66b-f63f-4f47-b38d-52a2ff32ce38" containerName="pull"
Dec 08 09:25:58 crc kubenswrapper[4662]: E1208 09:25:58.076256 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1de6e66b-f63f-4f47-b38d-52a2ff32ce38" containerName="util"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.076262 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="1de6e66b-f63f-4f47-b38d-52a2ff32ce38" containerName="util"
Dec 08 09:25:58 crc kubenswrapper[4662]: E1208 09:25:58.076275 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0cf72db-464e-4859-bec6-0e3d456e10aa" containerName="console"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.076281 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0cf72db-464e-4859-bec6-0e3d456e10aa" containerName="console"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.076399 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0cf72db-464e-4859-bec6-0e3d456e10aa" containerName="console"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.076418 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="1de6e66b-f63f-4f47-b38d-52a2ff32ce38" containerName="extract"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.076817 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.079306 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-lsftf"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.079344 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.079344 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.079303 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.080613 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.091112 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"]
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.171498 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05682e28-5a70-4569-ac16-8cc0f3f17c39-webhook-cert\") pod \"metallb-operator-controller-manager-6bc95b94b-xrzfb\" (UID: \"05682e28-5a70-4569-ac16-8cc0f3f17c39\") " pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.171580 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05682e28-5a70-4569-ac16-8cc0f3f17c39-apiservice-cert\") pod \"metallb-operator-controller-manager-6bc95b94b-xrzfb\" (UID: \"05682e28-5a70-4569-ac16-8cc0f3f17c39\") " pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.171717 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7pf7\" (UniqueName: \"kubernetes.io/projected/05682e28-5a70-4569-ac16-8cc0f3f17c39-kube-api-access-f7pf7\") pod \"metallb-operator-controller-manager-6bc95b94b-xrzfb\" (UID: \"05682e28-5a70-4569-ac16-8cc0f3f17c39\") " pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.272955 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05682e28-5a70-4569-ac16-8cc0f3f17c39-webhook-cert\") pod \"metallb-operator-controller-manager-6bc95b94b-xrzfb\" (UID: \"05682e28-5a70-4569-ac16-8cc0f3f17c39\") " pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.273017 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05682e28-5a70-4569-ac16-8cc0f3f17c39-apiservice-cert\") pod \"metallb-operator-controller-manager-6bc95b94b-xrzfb\" (UID: \"05682e28-5a70-4569-ac16-8cc0f3f17c39\") " pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.273045 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7pf7\" (UniqueName: \"kubernetes.io/projected/05682e28-5a70-4569-ac16-8cc0f3f17c39-kube-api-access-f7pf7\") pod \"metallb-operator-controller-manager-6bc95b94b-xrzfb\" (UID: \"05682e28-5a70-4569-ac16-8cc0f3f17c39\") " pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.280354 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05682e28-5a70-4569-ac16-8cc0f3f17c39-webhook-cert\") pod \"metallb-operator-controller-manager-6bc95b94b-xrzfb\" (UID: \"05682e28-5a70-4569-ac16-8cc0f3f17c39\") " pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.283583 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05682e28-5a70-4569-ac16-8cc0f3f17c39-apiservice-cert\") pod \"metallb-operator-controller-manager-6bc95b94b-xrzfb\" (UID: \"05682e28-5a70-4569-ac16-8cc0f3f17c39\") " pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.297383 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7pf7\" (UniqueName: \"kubernetes.io/projected/05682e28-5a70-4569-ac16-8cc0f3f17c39-kube-api-access-f7pf7\") pod \"metallb-operator-controller-manager-6bc95b94b-xrzfb\" (UID: \"05682e28-5a70-4569-ac16-8cc0f3f17c39\") " pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.395701 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.408616 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"]
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.409325 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.412154 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.412218 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-vzq92"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.412607 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.424364 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"]
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.576655 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd084a40-bf8b-4764-ba4f-c587c5132b76-webhook-cert\") pod \"metallb-operator-webhook-server-77b87b85c5-226hm\" (UID: \"cd084a40-bf8b-4764-ba4f-c587c5132b76\") " pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.576713 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44mld\" (UniqueName: \"kubernetes.io/projected/cd084a40-bf8b-4764-ba4f-c587c5132b76-kube-api-access-44mld\") pod \"metallb-operator-webhook-server-77b87b85c5-226hm\" (UID: \"cd084a40-bf8b-4764-ba4f-c587c5132b76\") " pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.576975 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd084a40-bf8b-4764-ba4f-c587c5132b76-apiservice-cert\") pod \"metallb-operator-webhook-server-77b87b85c5-226hm\" (UID: \"cd084a40-bf8b-4764-ba4f-c587c5132b76\") " pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.687359 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd084a40-bf8b-4764-ba4f-c587c5132b76-apiservice-cert\") pod \"metallb-operator-webhook-server-77b87b85c5-226hm\" (UID: \"cd084a40-bf8b-4764-ba4f-c587c5132b76\") " pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.687700 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd084a40-bf8b-4764-ba4f-c587c5132b76-webhook-cert\") pod \"metallb-operator-webhook-server-77b87b85c5-226hm\" (UID: \"cd084a40-bf8b-4764-ba4f-c587c5132b76\") " pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.687734 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44mld\" (UniqueName: \"kubernetes.io/projected/cd084a40-bf8b-4764-ba4f-c587c5132b76-kube-api-access-44mld\") pod \"metallb-operator-webhook-server-77b87b85c5-226hm\" (UID: \"cd084a40-bf8b-4764-ba4f-c587c5132b76\") " pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.715471 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd084a40-bf8b-4764-ba4f-c587c5132b76-webhook-cert\") pod \"metallb-operator-webhook-server-77b87b85c5-226hm\" (UID: \"cd084a40-bf8b-4764-ba4f-c587c5132b76\") " pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.719926 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd084a40-bf8b-4764-ba4f-c587c5132b76-apiservice-cert\") pod \"metallb-operator-webhook-server-77b87b85c5-226hm\" (UID: \"cd084a40-bf8b-4764-ba4f-c587c5132b76\") " pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.720370 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44mld\" (UniqueName: \"kubernetes.io/projected/cd084a40-bf8b-4764-ba4f-c587c5132b76-kube-api-access-44mld\") pod \"metallb-operator-webhook-server-77b87b85c5-226hm\" (UID: \"cd084a40-bf8b-4764-ba4f-c587c5132b76\") " pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.799041 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"
Dec 08 09:25:58 crc kubenswrapper[4662]: I1208 09:25:58.897471 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"]
Dec 08 09:25:58 crc kubenswrapper[4662]: W1208 09:25:58.902565 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05682e28_5a70_4569_ac16_8cc0f3f17c39.slice/crio-a49cef935855e06ace8d24e8e3e1c111150366ff5f7857d4f7edf5b44dbeae47 WatchSource:0}: Error finding container a49cef935855e06ace8d24e8e3e1c111150366ff5f7857d4f7edf5b44dbeae47: Status 404 returned error can't find the container with id a49cef935855e06ace8d24e8e3e1c111150366ff5f7857d4f7edf5b44dbeae47
Dec 08 09:25:59 crc kubenswrapper[4662]: I1208 09:25:59.098530 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm"]
Dec 08 09:25:59 crc kubenswrapper[4662]: W1208 09:25:59.103279 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd084a40_bf8b_4764_ba4f_c587c5132b76.slice/crio-6dd364c8fa7fb9e93a54fb618112d7ba6e9ff8a2a87f04420e9a7219fd831599 WatchSource:0}: Error finding container 6dd364c8fa7fb9e93a54fb618112d7ba6e9ff8a2a87f04420e9a7219fd831599: Status 404 returned error can't find the container with id 6dd364c8fa7fb9e93a54fb618112d7ba6e9ff8a2a87f04420e9a7219fd831599
Dec 08 09:25:59 crc kubenswrapper[4662]: I1208 09:25:59.139943 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm" event={"ID":"cd084a40-bf8b-4764-ba4f-c587c5132b76","Type":"ContainerStarted","Data":"6dd364c8fa7fb9e93a54fb618112d7ba6e9ff8a2a87f04420e9a7219fd831599"}
Dec 08 09:25:59 crc kubenswrapper[4662]: I1208 09:25:59.140887 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb"
event={"ID":"05682e28-5a70-4569-ac16-8cc0f3f17c39","Type":"ContainerStarted","Data":"a49cef935855e06ace8d24e8e3e1c111150366ff5f7857d4f7edf5b44dbeae47"} Dec 08 09:26:05 crc kubenswrapper[4662]: I1208 09:26:05.175417 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm" event={"ID":"cd084a40-bf8b-4764-ba4f-c587c5132b76","Type":"ContainerStarted","Data":"cf003eaff0339d488658110de9fe737944e2a79672160e98013f383c30c42144"} Dec 08 09:26:05 crc kubenswrapper[4662]: I1208 09:26:05.176869 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm" Dec 08 09:26:05 crc kubenswrapper[4662]: I1208 09:26:05.176900 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb" event={"ID":"05682e28-5a70-4569-ac16-8cc0f3f17c39","Type":"ContainerStarted","Data":"8081e1efce4952e9eccac9ff733ecd45c52cc193c46b4452ba45a602ccdddd5d"} Dec 08 09:26:05 crc kubenswrapper[4662]: I1208 09:26:05.177028 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb" Dec 08 09:26:05 crc kubenswrapper[4662]: I1208 09:26:05.197510 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm" podStartSLOduration=1.920754633 podStartE2EDuration="7.197493525s" podCreationTimestamp="2025-12-08 09:25:58 +0000 UTC" firstStartedPulling="2025-12-08 09:25:59.106462598 +0000 UTC m=+682.675490588" lastFinishedPulling="2025-12-08 09:26:04.38320147 +0000 UTC m=+687.952229480" observedRunningTime="2025-12-08 09:26:05.194914775 +0000 UTC m=+688.763942805" watchObservedRunningTime="2025-12-08 09:26:05.197493525 +0000 UTC m=+688.766521525" Dec 08 09:26:05 crc kubenswrapper[4662]: I1208 09:26:05.220955 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb" podStartSLOduration=1.769348261 podStartE2EDuration="7.220929409s" podCreationTimestamp="2025-12-08 09:25:58 +0000 UTC" firstStartedPulling="2025-12-08 09:25:58.916981376 +0000 UTC m=+682.486009366" lastFinishedPulling="2025-12-08 09:26:04.368562514 +0000 UTC m=+687.937590514" observedRunningTime="2025-12-08 09:26:05.214482895 +0000 UTC m=+688.783510905" watchObservedRunningTime="2025-12-08 09:26:05.220929409 +0000 UTC m=+688.789957399" Dec 08 09:26:18 crc kubenswrapper[4662]: I1208 09:26:18.817049 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-77b87b85c5-226hm" Dec 08 09:26:32 crc kubenswrapper[4662]: I1208 09:26:32.611350 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:26:32 crc kubenswrapper[4662]: I1208 09:26:32.612706 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:26:38 crc kubenswrapper[4662]: I1208 09:26:38.399562 
4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6bc95b94b-xrzfb" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.123546 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-d84ss"] Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.126307 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.128783 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt"] Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.129698 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.131140 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.131438 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.131513 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.131585 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lbfcf" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.143518 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt"] Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.173690 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-frr-conf\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.173984 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-frr-startup\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.174138 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-frr-sockets\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.174263 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj8qd\" (UniqueName: \"kubernetes.io/projected/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-kube-api-access-jj8qd\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.174393 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-reloader\") pod \"frr-k8s-d84ss\" (UID: 
\"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.174520 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-metrics\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.174621 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/855c2f04-3def-48ad-b73c-535485327343-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-llfmt\" (UID: \"855c2f04-3def-48ad-b73c-535485327343\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.174721 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-metrics-certs\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.174840 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66mff\" (UniqueName: \"kubernetes.io/projected/855c2f04-3def-48ad-b73c-535485327343-kube-api-access-66mff\") pod \"frr-k8s-webhook-server-7fcb986d4-llfmt\" (UID: \"855c2f04-3def-48ad-b73c-535485327343\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.276454 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-frr-conf\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.276517 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-frr-startup\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.276564 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-frr-sockets\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.276595 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jj8qd\" (UniqueName: \"kubernetes.io/projected/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-kube-api-access-jj8qd\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.276618 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-reloader\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.276641 4662 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-metrics\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.276668 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/855c2f04-3def-48ad-b73c-535485327343-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-llfmt\" (UID: \"855c2f04-3def-48ad-b73c-535485327343\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.276697 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-metrics-certs\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.276724 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66mff\" (UniqueName: \"kubernetes.io/projected/855c2f04-3def-48ad-b73c-535485327343-kube-api-access-66mff\") pod \"frr-k8s-webhook-server-7fcb986d4-llfmt\" (UID: \"855c2f04-3def-48ad-b73c-535485327343\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.277332 4662 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.277423 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-metrics-certs podName:77b8b61e-ff7a-424e-bbd8-9f20ce485c51 nodeName:}" failed. No retries permitted until 2025-12-08 09:26:39.777396132 +0000 UTC m=+723.346424122 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-metrics-certs") pod "frr-k8s-d84ss" (UID: "77b8b61e-ff7a-424e-bbd8-9f20ce485c51") : secret "frr-k8s-certs-secret" not found Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.277472 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-frr-conf\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.277525 4662 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.277589 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/855c2f04-3def-48ad-b73c-535485327343-cert podName:855c2f04-3def-48ad-b73c-535485327343 nodeName:}" failed. No retries permitted until 2025-12-08 09:26:39.777568727 +0000 UTC m=+723.346596797 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/855c2f04-3def-48ad-b73c-535485327343-cert") pod "frr-k8s-webhook-server-7fcb986d4-llfmt" (UID: "855c2f04-3def-48ad-b73c-535485327343") : secret "frr-k8s-webhook-server-cert" not found Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.277921 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-reloader\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.278053 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-metrics\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.277972 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-frr-sockets\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.278233 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-frr-startup\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.309734 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66mff\" (UniqueName: \"kubernetes.io/projected/855c2f04-3def-48ad-b73c-535485327343-kube-api-access-66mff\") pod \"frr-k8s-webhook-server-7fcb986d4-llfmt\" (UID: \"855c2f04-3def-48ad-b73c-535485327343\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.322190 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-4q6hz"] Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.323008 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.323681 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jj8qd\" (UniqueName: \"kubernetes.io/projected/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-kube-api-access-jj8qd\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.330090 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-f8648f98b-hqc7f"] Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.330914 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.331356 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.331560 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.332705 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.341707 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.341760 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-xgd6l" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.351017 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-hqc7f"] Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.377390 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c8hh\" (UniqueName: \"kubernetes.io/projected/5cead70a-4652-4695-87aa-ef3d3ecb419d-kube-api-access-9c8hh\") pod \"controller-f8648f98b-hqc7f\" (UID: \"5cead70a-4652-4695-87aa-ef3d3ecb419d\") " pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.377605 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cead70a-4652-4695-87aa-ef3d3ecb419d-cert\") pod \"controller-f8648f98b-hqc7f\" (UID: \"5cead70a-4652-4695-87aa-ef3d3ecb419d\") " pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.377794 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lzswp\" (UniqueName: \"kubernetes.io/projected/52a4fbf9-995c-4926-8d43-21adb4a9455d-kube-api-access-lzswp\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.377851 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5cead70a-4652-4695-87aa-ef3d3ecb419d-metrics-certs\") pod \"controller-f8648f98b-hqc7f\" (UID: \"5cead70a-4652-4695-87aa-ef3d3ecb419d\") " pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.377936 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-memberlist\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.377966 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/52a4fbf9-995c-4926-8d43-21adb4a9455d-metallb-excludel2\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.377994 4662 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-metrics-certs\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.478999 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-metrics-certs\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.479099 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c8hh\" (UniqueName: \"kubernetes.io/projected/5cead70a-4652-4695-87aa-ef3d3ecb419d-kube-api-access-9c8hh\") pod \"controller-f8648f98b-hqc7f\" (UID: \"5cead70a-4652-4695-87aa-ef3d3ecb419d\") " pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.479119 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cead70a-4652-4695-87aa-ef3d3ecb419d-cert\") pod \"controller-f8648f98b-hqc7f\" (UID: \"5cead70a-4652-4695-87aa-ef3d3ecb419d\") " pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.479145 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lzswp\" (UniqueName: \"kubernetes.io/projected/52a4fbf9-995c-4926-8d43-21adb4a9455d-kube-api-access-lzswp\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.479165 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5cead70a-4652-4695-87aa-ef3d3ecb419d-metrics-certs\") pod \"controller-f8648f98b-hqc7f\" (UID: \"5cead70a-4652-4695-87aa-ef3d3ecb419d\") " pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.479184 4662 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.479301 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-metrics-certs podName:52a4fbf9-995c-4926-8d43-21adb4a9455d nodeName:}" failed. No retries permitted until 2025-12-08 09:26:39.979271469 +0000 UTC m=+723.548299469 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-metrics-certs") pod "speaker-4q6hz" (UID: "52a4fbf9-995c-4926-8d43-21adb4a9455d") : secret "speaker-certs-secret" not found Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.479202 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-memberlist\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.479307 4662 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.479375 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/52a4fbf9-995c-4926-8d43-21adb4a9455d-metallb-excludel2\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.479424 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-memberlist podName:52a4fbf9-995c-4926-8d43-21adb4a9455d nodeName:}" failed. No retries permitted until 2025-12-08 09:26:39.979409463 +0000 UTC m=+723.548437453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-memberlist") pod "speaker-4q6hz" (UID: "52a4fbf9-995c-4926-8d43-21adb4a9455d") : secret "metallb-memberlist" not found Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.479307 4662 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.479454 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5cead70a-4652-4695-87aa-ef3d3ecb419d-metrics-certs podName:5cead70a-4652-4695-87aa-ef3d3ecb419d nodeName:}" failed. No retries permitted until 2025-12-08 09:26:39.979449354 +0000 UTC m=+723.548477344 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5cead70a-4652-4695-87aa-ef3d3ecb419d-metrics-certs") pod "controller-f8648f98b-hqc7f" (UID: "5cead70a-4652-4695-87aa-ef3d3ecb419d") : secret "controller-certs-secret" not found Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.480067 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/52a4fbf9-995c-4926-8d43-21adb4a9455d-metallb-excludel2\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.484445 4662 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.494415 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c8hh\" (UniqueName: \"kubernetes.io/projected/5cead70a-4652-4695-87aa-ef3d3ecb419d-kube-api-access-9c8hh\") pod \"controller-f8648f98b-hqc7f\" (UID: \"5cead70a-4652-4695-87aa-ef3d3ecb419d\") " pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.498281 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5cead70a-4652-4695-87aa-ef3d3ecb419d-cert\") pod \"controller-f8648f98b-hqc7f\" (UID: \"5cead70a-4652-4695-87aa-ef3d3ecb419d\") " pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.505414 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lzswp\" (UniqueName: \"kubernetes.io/projected/52a4fbf9-995c-4926-8d43-21adb4a9455d-kube-api-access-lzswp\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.782691 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/855c2f04-3def-48ad-b73c-535485327343-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-llfmt\" (UID: \"855c2f04-3def-48ad-b73c-535485327343\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.782793 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-metrics-certs\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.786575 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/77b8b61e-ff7a-424e-bbd8-9f20ce485c51-metrics-certs\") pod \"frr-k8s-d84ss\" (UID: \"77b8b61e-ff7a-424e-bbd8-9f20ce485c51\") " pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.786860 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/855c2f04-3def-48ad-b73c-535485327343-cert\") pod \"frr-k8s-webhook-server-7fcb986d4-llfmt\" (UID: \"855c2f04-3def-48ad-b73c-535485327343\") " pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.996470 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5cead70a-4652-4695-87aa-ef3d3ecb419d-metrics-certs\") pod \"controller-f8648f98b-hqc7f\" (UID: \"5cead70a-4652-4695-87aa-ef3d3ecb419d\") " pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.996811 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-memberlist\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: I1208 09:26:39.996840 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-metrics-certs\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.997037 4662 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Dec 08 09:26:39 crc kubenswrapper[4662]: E1208 09:26:39.997113 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-memberlist podName:52a4fbf9-995c-4926-8d43-21adb4a9455d nodeName:}" failed. No retries permitted until 2025-12-08 09:26:40.997094309 +0000 UTC m=+724.566122299 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-memberlist") pod "speaker-4q6hz" (UID: "52a4fbf9-995c-4926-8d43-21adb4a9455d") : secret "metallb-memberlist" not found Dec 08 09:26:40 crc kubenswrapper[4662]: I1208 09:26:40.000640 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5cead70a-4652-4695-87aa-ef3d3ecb419d-metrics-certs\") pod \"controller-f8648f98b-hqc7f\" (UID: \"5cead70a-4652-4695-87aa-ef3d3ecb419d\") " pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:40 crc kubenswrapper[4662]: I1208 09:26:40.000959 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-metrics-certs\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:40 crc kubenswrapper[4662]: I1208 09:26:40.047102 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:40 crc kubenswrapper[4662]: I1208 09:26:40.066177 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:26:40 crc kubenswrapper[4662]: I1208 09:26:40.276320 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:40 crc kubenswrapper[4662]: I1208 09:26:40.378948 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d84ss" event={"ID":"77b8b61e-ff7a-424e-bbd8-9f20ce485c51","Type":"ContainerStarted","Data":"ca2ec84b05e17b1bce264612ea43661c3e9bd53a4684fcc0d25af9454f1243ac"} Dec 08 09:26:40 crc kubenswrapper[4662]: I1208 09:26:40.484359 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt"] Dec 08 09:26:40 crc kubenswrapper[4662]: W1208 09:26:40.487885 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod855c2f04_3def_48ad_b73c_535485327343.slice/crio-269ba5672495775a2eda7354cf621b87af3b36fe202ae61a56e82fdb5223e9cc WatchSource:0}: Error finding container 269ba5672495775a2eda7354cf621b87af3b36fe202ae61a56e82fdb5223e9cc: Status 404 returned error can't find the container with id 269ba5672495775a2eda7354cf621b87af3b36fe202ae61a56e82fdb5223e9cc Dec 08 09:26:40 crc kubenswrapper[4662]: I1208 09:26:40.527030 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-f8648f98b-hqc7f"] Dec 08 09:26:41 crc kubenswrapper[4662]: I1208 09:26:41.009979 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-memberlist\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:41 crc kubenswrapper[4662]: I1208 09:26:41.019442 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/52a4fbf9-995c-4926-8d43-21adb4a9455d-memberlist\") pod \"speaker-4q6hz\" (UID: \"52a4fbf9-995c-4926-8d43-21adb4a9455d\") " pod="metallb-system/speaker-4q6hz" Dec 08 09:26:41 crc kubenswrapper[4662]: I1208 09:26:41.163141 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-4q6hz" Dec 08 09:26:41 crc kubenswrapper[4662]: I1208 09:26:41.384070 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4q6hz" event={"ID":"52a4fbf9-995c-4926-8d43-21adb4a9455d","Type":"ContainerStarted","Data":"c6fcc179afec299dc9a57eb2dd3469c55fd2d1e250b2f53108edfeafc5b56f9e"} Dec 08 09:26:41 crc kubenswrapper[4662]: I1208 09:26:41.386124 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-hqc7f" event={"ID":"5cead70a-4652-4695-87aa-ef3d3ecb419d","Type":"ContainerStarted","Data":"7ad46b39dfdcfbcf486b1193fa4586f65b8e5d9f654257f4b16735366578e614"} Dec 08 09:26:41 crc kubenswrapper[4662]: I1208 09:26:41.386154 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-hqc7f" event={"ID":"5cead70a-4652-4695-87aa-ef3d3ecb419d","Type":"ContainerStarted","Data":"d86ce308656dfbc84b587df4a2fe9944e1d8ffc8a5048e9648c8244a07ca9681"} Dec 08 09:26:41 crc kubenswrapper[4662]: I1208 09:26:41.386164 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-f8648f98b-hqc7f" event={"ID":"5cead70a-4652-4695-87aa-ef3d3ecb419d","Type":"ContainerStarted","Data":"bc75753d6527382617dc13a042ab22590d47e20d13507ede4591152499237e24"} Dec 08 09:26:41 crc kubenswrapper[4662]: I1208 09:26:41.386307 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:41 crc kubenswrapper[4662]: I1208 09:26:41.387652 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" event={"ID":"855c2f04-3def-48ad-b73c-535485327343","Type":"ContainerStarted","Data":"269ba5672495775a2eda7354cf621b87af3b36fe202ae61a56e82fdb5223e9cc"} Dec 08 09:26:41 crc kubenswrapper[4662]: I1208 09:26:41.402980 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-f8648f98b-hqc7f" podStartSLOduration=2.402964529 podStartE2EDuration="2.402964529s" podCreationTimestamp="2025-12-08 09:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:26:41.401800958 +0000 UTC m=+724.970828948" watchObservedRunningTime="2025-12-08 09:26:41.402964529 +0000 UTC m=+724.971992519" Dec 08 09:26:42 crc kubenswrapper[4662]: I1208 09:26:42.396242 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4q6hz" event={"ID":"52a4fbf9-995c-4926-8d43-21adb4a9455d","Type":"ContainerStarted","Data":"4f6c1335c72d00458856e7bd00f862c8c5f56a717b4a840761b344b34ea36e9b"} Dec 08 09:26:42 crc kubenswrapper[4662]: I1208 09:26:42.396507 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-4q6hz" event={"ID":"52a4fbf9-995c-4926-8d43-21adb4a9455d","Type":"ContainerStarted","Data":"ed2451250811f96f7445fe1cb6fd61c32df7821ff61ae1397e8c68935ee22a12"} Dec 08 09:26:42 crc kubenswrapper[4662]: I1208 09:26:42.396605 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-4q6hz" Dec 08 09:26:42 crc kubenswrapper[4662]: I1208 09:26:42.419633 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-4q6hz" podStartSLOduration=3.4196180050000002 podStartE2EDuration="3.419618005s" podCreationTimestamp="2025-12-08 09:26:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:26:42.418487875 +0000 UTC m=+725.987515865" watchObservedRunningTime="2025-12-08 09:26:42.419618005 +0000 UTC m=+725.988645995" Dec 08 09:26:48 crc kubenswrapper[4662]: I1208 09:26:48.442984 4662 generic.go:334] "Generic (PLEG): container finished" podID="77b8b61e-ff7a-424e-bbd8-9f20ce485c51" containerID="902cc3bf2ea25c0826e9a28a330eef54b39e0223605cee119f541727d83a4b08" exitCode=0 Dec 08 09:26:48 crc kubenswrapper[4662]: I1208 09:26:48.443271 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d84ss" event={"ID":"77b8b61e-ff7a-424e-bbd8-9f20ce485c51","Type":"ContainerDied","Data":"902cc3bf2ea25c0826e9a28a330eef54b39e0223605cee119f541727d83a4b08"} Dec 08 09:26:48 crc kubenswrapper[4662]: I1208 09:26:48.450090 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" event={"ID":"855c2f04-3def-48ad-b73c-535485327343","Type":"ContainerStarted","Data":"a3b331a76ab4ffb45e9a9ef200ab153ddda3d1b9b37dfd16bcb00a5ce8ce0e98"} Dec 08 09:26:48 crc kubenswrapper[4662]: I1208 09:26:48.450402 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:26:48 crc kubenswrapper[4662]: I1208 09:26:48.501837 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" podStartSLOduration=1.887188474 podStartE2EDuration="9.501805173s" podCreationTimestamp="2025-12-08 09:26:39 +0000 UTC" firstStartedPulling="2025-12-08 09:26:40.502615667 +0000 UTC m=+724.071643657" lastFinishedPulling="2025-12-08 09:26:48.117232366 +0000 UTC m=+731.686260356" observedRunningTime="2025-12-08 09:26:48.496818128 +0000 UTC m=+732.065846128" watchObservedRunningTime="2025-12-08 09:26:48.501805173 +0000 UTC m=+732.070833163" Dec 08 09:26:49 crc kubenswrapper[4662]: I1208 09:26:49.465391 4662 generic.go:334] "Generic (PLEG): container finished" podID="77b8b61e-ff7a-424e-bbd8-9f20ce485c51" containerID="0b0b558d6a40927a6a5a1753c2d0e023c50cf293fe963c291ccae0ae0178d46f" exitCode=0 Dec 08 09:26:49 crc kubenswrapper[4662]: I1208 09:26:49.465496 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d84ss" event={"ID":"77b8b61e-ff7a-424e-bbd8-9f20ce485c51","Type":"ContainerDied","Data":"0b0b558d6a40927a6a5a1753c2d0e023c50cf293fe963c291ccae0ae0178d46f"} Dec 08 09:26:50 crc kubenswrapper[4662]: I1208 09:26:50.281548 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-f8648f98b-hqc7f" Dec 08 09:26:50 crc kubenswrapper[4662]: I1208 09:26:50.478500 4662 generic.go:334] "Generic (PLEG): container finished" podID="77b8b61e-ff7a-424e-bbd8-9f20ce485c51" containerID="900734a4458777063dd0da02a260debb9de1ccbeb62ac24a0f58cb634c14d68c" exitCode=0 Dec 08 09:26:50 crc kubenswrapper[4662]: I1208 09:26:50.478552 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d84ss" event={"ID":"77b8b61e-ff7a-424e-bbd8-9f20ce485c51","Type":"ContainerDied","Data":"900734a4458777063dd0da02a260debb9de1ccbeb62ac24a0f58cb634c14d68c"} Dec 08 09:26:51 crc kubenswrapper[4662]: I1208 09:26:51.166236 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-4q6hz" Dec 08 09:26:51 crc kubenswrapper[4662]: I1208 09:26:51.487303 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="metallb-system/frr-k8s-d84ss" event={"ID":"77b8b61e-ff7a-424e-bbd8-9f20ce485c51","Type":"ContainerStarted","Data":"7a5a92486c6a82de3200250c5075a14512c0ccb93cb75097d419fb529a0483e6"} Dec 08 09:26:51 crc kubenswrapper[4662]: I1208 09:26:51.487362 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d84ss" event={"ID":"77b8b61e-ff7a-424e-bbd8-9f20ce485c51","Type":"ContainerStarted","Data":"d5424930b86d9c86a5c63cf8cfb2086ffa7a22efc2ff93c7709974af004a1e08"} Dec 08 09:26:51 crc kubenswrapper[4662]: I1208 09:26:51.487372 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d84ss" event={"ID":"77b8b61e-ff7a-424e-bbd8-9f20ce485c51","Type":"ContainerStarted","Data":"9270fc42985a78c464d78b1c2abb6b6b974f9c233871645d1b9308e98c4e14b8"} Dec 08 09:26:51 crc kubenswrapper[4662]: I1208 09:26:51.487382 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d84ss" event={"ID":"77b8b61e-ff7a-424e-bbd8-9f20ce485c51","Type":"ContainerStarted","Data":"17f3c4cdf6672f1353c4b59ac05b94bffc20a8493c631c8593fa4578bb0c779a"} Dec 08 09:26:51 crc kubenswrapper[4662]: I1208 09:26:51.487390 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d84ss" event={"ID":"77b8b61e-ff7a-424e-bbd8-9f20ce485c51","Type":"ContainerStarted","Data":"08e9543fcb8b0bfeab76d69e0392fafd2fed8d5b5bbd294965a2399e6941b0f9"} Dec 08 09:26:52 crc kubenswrapper[4662]: I1208 09:26:52.497604 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d84ss" event={"ID":"77b8b61e-ff7a-424e-bbd8-9f20ce485c51","Type":"ContainerStarted","Data":"d6c2777740568df25773c958baa3c50da8c0fdcbe06618af4a0a07954dca18a8"} Dec 08 09:26:52 crc kubenswrapper[4662]: I1208 09:26:52.497783 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.018139 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-d84ss" podStartSLOduration=7.260489397 podStartE2EDuration="15.018117752s" podCreationTimestamp="2025-12-08 09:26:39 +0000 UTC" firstStartedPulling="2025-12-08 09:26:40.34183593 +0000 UTC m=+723.910863920" lastFinishedPulling="2025-12-08 09:26:48.099464285 +0000 UTC m=+731.668492275" observedRunningTime="2025-12-08 09:26:52.526797073 +0000 UTC m=+736.095825073" watchObservedRunningTime="2025-12-08 09:26:54.018117752 +0000 UTC m=+737.587145762" Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.020994 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-v5jgs"] Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.022953 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v5jgs" Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.026824 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.026850 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-bhhcf" Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.026865 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.051916 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v5jgs"] Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.089390 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9gts\" (UniqueName: \"kubernetes.io/projected/6d966d53-f10f-463f-bd47-09a859b9c935-kube-api-access-n9gts\") pod \"openstack-operator-index-v5jgs\" (UID: \"6d966d53-f10f-463f-bd47-09a859b9c935\") " pod="openstack-operators/openstack-operator-index-v5jgs" Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.190513 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9gts\" (UniqueName: \"kubernetes.io/projected/6d966d53-f10f-463f-bd47-09a859b9c935-kube-api-access-n9gts\") pod \"openstack-operator-index-v5jgs\" (UID: \"6d966d53-f10f-463f-bd47-09a859b9c935\") " pod="openstack-operators/openstack-operator-index-v5jgs" Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.232758 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9gts\" (UniqueName: \"kubernetes.io/projected/6d966d53-f10f-463f-bd47-09a859b9c935-kube-api-access-n9gts\") pod \"openstack-operator-index-v5jgs\" (UID: \"6d966d53-f10f-463f-bd47-09a859b9c935\") " pod="openstack-operators/openstack-operator-index-v5jgs" Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.345142 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-v5jgs" Dec 08 09:26:54 crc kubenswrapper[4662]: I1208 09:26:54.600928 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-v5jgs"] Dec 08 09:26:55 crc kubenswrapper[4662]: I1208 09:26:55.048704 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:55 crc kubenswrapper[4662]: I1208 09:26:55.101383 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-d84ss" Dec 08 09:26:55 crc kubenswrapper[4662]: I1208 09:26:55.543360 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v5jgs" event={"ID":"6d966d53-f10f-463f-bd47-09a859b9c935","Type":"ContainerStarted","Data":"3b8b0cb548f905be5191888a8def007d927f54d2f021afa3120b0b916fac7ff7"} Dec 08 09:26:57 crc kubenswrapper[4662]: I1208 09:26:57.413239 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-v5jgs"] Dec 08 09:26:58 crc kubenswrapper[4662]: I1208 09:26:58.001420 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-l2kgc"] Dec 08 09:26:58 crc kubenswrapper[4662]: I1208 09:26:58.002493 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-l2kgc" Dec 08 09:26:58 crc kubenswrapper[4662]: I1208 09:26:58.009130 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l2kgc"] Dec 08 09:26:58 crc kubenswrapper[4662]: I1208 09:26:58.038232 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbptz\" (UniqueName: \"kubernetes.io/projected/4fa2b2d9-9c1a-411e-a7d8-b3ad321598f5-kube-api-access-fbptz\") pod \"openstack-operator-index-l2kgc\" (UID: \"4fa2b2d9-9c1a-411e-a7d8-b3ad321598f5\") " pod="openstack-operators/openstack-operator-index-l2kgc" Dec 08 09:26:58 crc kubenswrapper[4662]: I1208 09:26:58.139529 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbptz\" (UniqueName: \"kubernetes.io/projected/4fa2b2d9-9c1a-411e-a7d8-b3ad321598f5-kube-api-access-fbptz\") pod \"openstack-operator-index-l2kgc\" (UID: \"4fa2b2d9-9c1a-411e-a7d8-b3ad321598f5\") " pod="openstack-operators/openstack-operator-index-l2kgc" Dec 08 09:26:58 crc kubenswrapper[4662]: I1208 09:26:58.163618 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbptz\" (UniqueName: \"kubernetes.io/projected/4fa2b2d9-9c1a-411e-a7d8-b3ad321598f5-kube-api-access-fbptz\") pod \"openstack-operator-index-l2kgc\" (UID: \"4fa2b2d9-9c1a-411e-a7d8-b3ad321598f5\") " pod="openstack-operators/openstack-operator-index-l2kgc" Dec 08 09:26:58 crc kubenswrapper[4662]: I1208 09:26:58.328555 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-l2kgc" Dec 08 09:26:59 crc kubenswrapper[4662]: I1208 09:26:59.360580 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-l2kgc"] Dec 08 09:26:59 crc kubenswrapper[4662]: I1208 09:26:59.581057 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l2kgc" event={"ID":"4fa2b2d9-9c1a-411e-a7d8-b3ad321598f5","Type":"ContainerStarted","Data":"a8422eb1d85a8ed360449d184873e1e4ceefe86b8335f9c0c968bbe93f2a42d8"} Dec 08 09:26:59 crc kubenswrapper[4662]: I1208 09:26:59.583218 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v5jgs" event={"ID":"6d966d53-f10f-463f-bd47-09a859b9c935","Type":"ContainerStarted","Data":"03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e"} Dec 08 09:26:59 crc kubenswrapper[4662]: I1208 09:26:59.583382 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-v5jgs" podUID="6d966d53-f10f-463f-bd47-09a859b9c935" containerName="registry-server" containerID="cri-o://03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e" gracePeriod=2 Dec 08 09:26:59 crc kubenswrapper[4662]: I1208 09:26:59.606729 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-v5jgs" podStartSLOduration=1.025784945 podStartE2EDuration="5.606705145s" podCreationTimestamp="2025-12-08 09:26:54 +0000 UTC" firstStartedPulling="2025-12-08 09:26:54.618471983 +0000 UTC m=+738.187499983" lastFinishedPulling="2025-12-08 09:26:59.199392193 +0000 UTC m=+742.768420183" observedRunningTime="2025-12-08 09:26:59.601422133 +0000 UTC m=+743.170450133" watchObservedRunningTime="2025-12-08 09:26:59.606705145 +0000 UTC m=+743.175733135" Dec 08 09:26:59 crc kubenswrapper[4662]: I1208 09:26:59.940947 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v5jgs" Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.051073 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-d84ss" Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.066806 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n9gts\" (UniqueName: \"kubernetes.io/projected/6d966d53-f10f-463f-bd47-09a859b9c935-kube-api-access-n9gts\") pod \"6d966d53-f10f-463f-bd47-09a859b9c935\" (UID: \"6d966d53-f10f-463f-bd47-09a859b9c935\") " Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.078523 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d966d53-f10f-463f-bd47-09a859b9c935-kube-api-access-n9gts" (OuterVolumeSpecName: "kube-api-access-n9gts") pod "6d966d53-f10f-463f-bd47-09a859b9c935" (UID: "6d966d53-f10f-463f-bd47-09a859b9c935"). InnerVolumeSpecName "kube-api-access-n9gts". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.078555 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7fcb986d4-llfmt" Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.169212 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n9gts\" (UniqueName: \"kubernetes.io/projected/6d966d53-f10f-463f-bd47-09a859b9c935-kube-api-access-n9gts\") on node \"crc\" DevicePath \"\"" Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.591663 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-l2kgc" event={"ID":"4fa2b2d9-9c1a-411e-a7d8-b3ad321598f5","Type":"ContainerStarted","Data":"83e3d31c27c3352f395443d6675b6e5005b8a4e78914e9262b4085b24f341783"} Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.595074 4662 generic.go:334] "Generic (PLEG): container finished" podID="6d966d53-f10f-463f-bd47-09a859b9c935" containerID="03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e" exitCode=0 Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.595120 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v5jgs" event={"ID":"6d966d53-f10f-463f-bd47-09a859b9c935","Type":"ContainerDied","Data":"03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e"} Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.595140 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-v5jgs" Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.595160 4662 scope.go:117] "RemoveContainer" containerID="03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e" Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.595147 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-v5jgs" event={"ID":"6d966d53-f10f-463f-bd47-09a859b9c935","Type":"ContainerDied","Data":"3b8b0cb548f905be5191888a8def007d927f54d2f021afa3120b0b916fac7ff7"} Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.617778 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-l2kgc" podStartSLOduration=3.528305932 podStartE2EDuration="3.6177228s" podCreationTimestamp="2025-12-08 09:26:57 +0000 UTC" firstStartedPulling="2025-12-08 09:26:59.37829445 +0000 UTC m=+742.947322440" lastFinishedPulling="2025-12-08 09:26:59.467711298 +0000 UTC m=+743.036739308" observedRunningTime="2025-12-08 09:27:00.609449516 +0000 UTC m=+744.178477516" watchObservedRunningTime="2025-12-08 09:27:00.6177228 +0000 UTC m=+744.186750830" Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.620348 4662 scope.go:117] "RemoveContainer" containerID="03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e" Dec 08 09:27:00 crc kubenswrapper[4662]: E1208 09:27:00.621062 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e\": container with ID starting with 03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e not found: ID does not exist" containerID="03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e" Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.621140 4662 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e"} err="failed to get container status \"03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e\": rpc error: code = NotFound desc = could not find container \"03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e\": container with ID starting with 03649ba1fadf80a49abf89d64ff9fd25b9ba301c4043f68788bf7b03d4906c4e not found: ID does not exist" Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.637454 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-v5jgs"] Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.641070 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-v5jgs"] Dec 08 09:27:00 crc kubenswrapper[4662]: I1208 09:27:00.706338 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d966d53-f10f-463f-bd47-09a859b9c935" path="/var/lib/kubelet/pods/6d966d53-f10f-463f-bd47-09a859b9c935/volumes" Dec 08 09:27:02 crc kubenswrapper[4662]: I1208 09:27:02.615561 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:27:02 crc kubenswrapper[4662]: I1208 09:27:02.615994 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:27:08 crc kubenswrapper[4662]: I1208 09:27:08.328635 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-l2kgc" Dec 08 09:27:08 crc kubenswrapper[4662]: I1208 09:27:08.329160 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-l2kgc" Dec 08 09:27:08 crc kubenswrapper[4662]: I1208 09:27:08.369911 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-l2kgc" Dec 08 09:27:08 crc kubenswrapper[4662]: I1208 09:27:08.688655 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-l2kgc" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.637920 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf"] Dec 08 09:27:09 crc kubenswrapper[4662]: E1208 09:27:09.638505 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d966d53-f10f-463f-bd47-09a859b9c935" containerName="registry-server" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.638521 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d966d53-f10f-463f-bd47-09a859b9c935" containerName="registry-server" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.638661 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d966d53-f10f-463f-bd47-09a859b9c935" containerName="registry-server" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.639668 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.644685 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-2lvzd" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.662620 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf"] Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.695070 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tz697\" (UniqueName: \"kubernetes.io/projected/359a63b7-e36a-4741-a91f-545218de47a5-kube-api-access-tz697\") pod \"dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") " pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.695192 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-util\") pod \"dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") " pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.695267 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-bundle\") pod \"dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") " pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.796882 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-util\") pod \"dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") " pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.797022 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-bundle\") pod \"dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") " pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.797068 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tz697\" (UniqueName: \"kubernetes.io/projected/359a63b7-e36a-4741-a91f-545218de47a5-kube-api-access-tz697\") pod \"dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") " pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.797441 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-util\") pod \"dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") " pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.797680 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-bundle\") pod \"dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") " pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.831994 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tz697\" (UniqueName: \"kubernetes.io/projected/359a63b7-e36a-4741-a91f-545218de47a5-kube-api-access-tz697\") pod \"dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") " pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:09 crc kubenswrapper[4662]: I1208 09:27:09.964404 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:10 crc kubenswrapper[4662]: I1208 09:27:10.192033 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf"] Dec 08 09:27:10 crc kubenswrapper[4662]: I1208 09:27:10.678754 4662 generic.go:334] "Generic (PLEG): container finished" podID="359a63b7-e36a-4741-a91f-545218de47a5" containerID="b77ad4b9996b63d3c4e7f8d31f0dbabbb26eab5aa306830a5de259a98eeee072" exitCode=0 Dec 08 09:27:10 crc kubenswrapper[4662]: I1208 09:27:10.678825 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" event={"ID":"359a63b7-e36a-4741-a91f-545218de47a5","Type":"ContainerDied","Data":"b77ad4b9996b63d3c4e7f8d31f0dbabbb26eab5aa306830a5de259a98eeee072"} Dec 08 09:27:10 crc kubenswrapper[4662]: I1208 09:27:10.679022 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" event={"ID":"359a63b7-e36a-4741-a91f-545218de47a5","Type":"ContainerStarted","Data":"9db85f35fe45bc49cfc229088cf1d3a7e977c35109a4173a9e0286915697ba35"} Dec 08 09:27:11 crc kubenswrapper[4662]: I1208 09:27:11.686250 4662 generic.go:334] "Generic (PLEG): container finished" podID="359a63b7-e36a-4741-a91f-545218de47a5" containerID="06d9d45bc28d327b3d003385d5fc555b7f63c521674c7d5b47fdcfdb800b7111" exitCode=0 Dec 08 09:27:11 crc kubenswrapper[4662]: I1208 09:27:11.686573 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" event={"ID":"359a63b7-e36a-4741-a91f-545218de47a5","Type":"ContainerDied","Data":"06d9d45bc28d327b3d003385d5fc555b7f63c521674c7d5b47fdcfdb800b7111"} Dec 08 09:27:12 crc kubenswrapper[4662]: I1208 09:27:12.695324 4662 generic.go:334] "Generic (PLEG): container finished" podID="359a63b7-e36a-4741-a91f-545218de47a5" containerID="655921bddcfcd0f27e34dca032c5e0841a392b08703131b53925615c1ff5fb78" exitCode=0 Dec 08 09:27:12 crc kubenswrapper[4662]: I1208 09:27:12.695404 4662 
Dec 08 09:27:13 crc kubenswrapper[4662]: I1208 09:27:13.939453 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf"
Dec 08 09:27:13 crc kubenswrapper[4662]: I1208 09:27:13.955014 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-bundle\") pod \"359a63b7-e36a-4741-a91f-545218de47a5\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") "
Dec 08 09:27:13 crc kubenswrapper[4662]: I1208 09:27:13.955170 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-util\") pod \"359a63b7-e36a-4741-a91f-545218de47a5\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") "
Dec 08 09:27:13 crc kubenswrapper[4662]: I1208 09:27:13.955254 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tz697\" (UniqueName: \"kubernetes.io/projected/359a63b7-e36a-4741-a91f-545218de47a5-kube-api-access-tz697\") pod \"359a63b7-e36a-4741-a91f-545218de47a5\" (UID: \"359a63b7-e36a-4741-a91f-545218de47a5\") "
Dec 08 09:27:13 crc kubenswrapper[4662]: I1208 09:27:13.958889 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-bundle" (OuterVolumeSpecName: "bundle") pod "359a63b7-e36a-4741-a91f-545218de47a5" (UID: "359a63b7-e36a-4741-a91f-545218de47a5"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:27:13 crc kubenswrapper[4662]: I1208 09:27:13.965865 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359a63b7-e36a-4741-a91f-545218de47a5-kube-api-access-tz697" (OuterVolumeSpecName: "kube-api-access-tz697") pod "359a63b7-e36a-4741-a91f-545218de47a5" (UID: "359a63b7-e36a-4741-a91f-545218de47a5"). InnerVolumeSpecName "kube-api-access-tz697". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:27:13 crc kubenswrapper[4662]: I1208 09:27:13.974239 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-util" (OuterVolumeSpecName: "util") pod "359a63b7-e36a-4741-a91f-545218de47a5" (UID: "359a63b7-e36a-4741-a91f-545218de47a5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:27:14 crc kubenswrapper[4662]: I1208 09:27:14.060339 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tz697\" (UniqueName: \"kubernetes.io/projected/359a63b7-e36a-4741-a91f-545218de47a5-kube-api-access-tz697\") on node \"crc\" DevicePath \"\"" Dec 08 09:27:14 crc kubenswrapper[4662]: I1208 09:27:14.060381 4662 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:27:14 crc kubenswrapper[4662]: I1208 09:27:14.060393 4662 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/359a63b7-e36a-4741-a91f-545218de47a5-util\") on node \"crc\" DevicePath \"\"" Dec 08 09:27:14 crc kubenswrapper[4662]: I1208 09:27:14.709890 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" event={"ID":"359a63b7-e36a-4741-a91f-545218de47a5","Type":"ContainerDied","Data":"9db85f35fe45bc49cfc229088cf1d3a7e977c35109a4173a9e0286915697ba35"} Dec 08 09:27:14 crc kubenswrapper[4662]: I1208 09:27:14.710314 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9db85f35fe45bc49cfc229088cf1d3a7e977c35109a4173a9e0286915697ba35" Dec 08 09:27:14 crc kubenswrapper[4662]: I1208 09:27:14.710119 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf" Dec 08 09:27:19 crc kubenswrapper[4662]: I1208 09:27:19.205132 4662 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.248672 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw"] Dec 08 09:27:22 crc kubenswrapper[4662]: E1208 09:27:22.249452 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359a63b7-e36a-4741-a91f-545218de47a5" containerName="pull" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.249465 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="359a63b7-e36a-4741-a91f-545218de47a5" containerName="pull" Dec 08 09:27:22 crc kubenswrapper[4662]: E1208 09:27:22.249475 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359a63b7-e36a-4741-a91f-545218de47a5" containerName="extract" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.249484 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="359a63b7-e36a-4741-a91f-545218de47a5" containerName="extract" Dec 08 09:27:22 crc kubenswrapper[4662]: E1208 09:27:22.249511 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359a63b7-e36a-4741-a91f-545218de47a5" containerName="util" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.249519 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="359a63b7-e36a-4741-a91f-545218de47a5" containerName="util" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.249654 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="359a63b7-e36a-4741-a91f-545218de47a5" containerName="extract" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.250158 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.256223 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-2wvxv" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.266925 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77fqc\" (UniqueName: \"kubernetes.io/projected/0e25126d-5ab8-4691-aa80-352149bc813b-kube-api-access-77fqc\") pod \"openstack-operator-controller-operator-5974cc6b8d-bqsmw\" (UID: \"0e25126d-5ab8-4691-aa80-352149bc813b\") " pod="openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.288902 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw"] Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.368220 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77fqc\" (UniqueName: \"kubernetes.io/projected/0e25126d-5ab8-4691-aa80-352149bc813b-kube-api-access-77fqc\") pod \"openstack-operator-controller-operator-5974cc6b8d-bqsmw\" (UID: \"0e25126d-5ab8-4691-aa80-352149bc813b\") " pod="openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.391140 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77fqc\" (UniqueName: \"kubernetes.io/projected/0e25126d-5ab8-4691-aa80-352149bc813b-kube-api-access-77fqc\") pod \"openstack-operator-controller-operator-5974cc6b8d-bqsmw\" (UID: \"0e25126d-5ab8-4691-aa80-352149bc813b\") " pod="openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.567240 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw" Dec 08 09:27:22 crc kubenswrapper[4662]: I1208 09:27:22.842523 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw"] Dec 08 09:27:23 crc kubenswrapper[4662]: I1208 09:27:23.765990 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw" event={"ID":"0e25126d-5ab8-4691-aa80-352149bc813b","Type":"ContainerStarted","Data":"66d1132e7f7c6b99f95fd6f8ce1341ef1a6d05294f83956aa7bff60c2b8bd28c"} Dec 08 09:27:28 crc kubenswrapper[4662]: I1208 09:27:28.812429 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw" event={"ID":"0e25126d-5ab8-4691-aa80-352149bc813b","Type":"ContainerStarted","Data":"f80ac4f0891d577a5ad05f566170b9a31f7d703869f24d5fc59d3b525cfc2358"} Dec 08 09:27:28 crc kubenswrapper[4662]: I1208 09:27:28.813052 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw" Dec 08 09:27:28 crc kubenswrapper[4662]: I1208 09:27:28.853772 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw" podStartSLOduration=1.888691584 podStartE2EDuration="6.85373401s" podCreationTimestamp="2025-12-08 09:27:22 +0000 UTC" firstStartedPulling="2025-12-08 09:27:22.832141599 +0000 UTC m=+766.401169589" lastFinishedPulling="2025-12-08 09:27:27.797184025 +0000 UTC m=+771.366212015" observedRunningTime="2025-12-08 09:27:28.838992511 +0000 UTC m=+772.408020501" watchObservedRunningTime="2025-12-08 09:27:28.85373401 +0000 UTC m=+772.422762000" Dec 08 09:27:32 crc kubenswrapper[4662]: I1208 09:27:32.570543 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-5974cc6b8d-bqsmw" Dec 08 09:27:32 crc kubenswrapper[4662]: I1208 09:27:32.611578 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:27:32 crc kubenswrapper[4662]: I1208 09:27:32.611844 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:27:32 crc kubenswrapper[4662]: I1208 09:27:32.611972 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:27:32 crc kubenswrapper[4662]: I1208 09:27:32.612559 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d950f79d0061a93dd2f9e3d3caab4b8f10f8ead0de736eba822a73ae528aea9e"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 09:27:32 crc kubenswrapper[4662]: I1208 09:27:32.612711 
Dec 08 09:27:32 crc kubenswrapper[4662]: I1208 09:27:32.842239 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerID="d950f79d0061a93dd2f9e3d3caab4b8f10f8ead0de736eba822a73ae528aea9e" exitCode=0
Dec 08 09:27:32 crc kubenswrapper[4662]: I1208 09:27:32.842349 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"d950f79d0061a93dd2f9e3d3caab4b8f10f8ead0de736eba822a73ae528aea9e"}
Dec 08 09:27:32 crc kubenswrapper[4662]: I1208 09:27:32.842933 4662 scope.go:117] "RemoveContainer" containerID="57a0d204dde3b59ba48b93d9451be527618c216af11a616391fcdba29c29b462"
Dec 08 09:27:33 crc kubenswrapper[4662]: I1208 09:27:33.850423 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"d04de701f63b5c2e4111b66668ec4560be524ad9596aef41adf5fe2ab05b3e40"}
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.342079 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz"]
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.343765 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz"
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.345319 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-9rnpq"
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.351499 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c"]
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.352695 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c"
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.354211 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-dvjfd" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.355512 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.420566 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5djfs\" (UniqueName: \"kubernetes.io/projected/b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c-kube-api-access-5djfs\") pod \"cinder-operator-controller-manager-6c677c69b-l9m5c\" (UID: \"b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.420630 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb5pv\" (UniqueName: \"kubernetes.io/projected/022626c4-d3b3-4c80-884c-6ae24361955a-kube-api-access-sb5pv\") pod \"barbican-operator-controller-manager-7d9dfd778-pnmbz\" (UID: \"022626c4-d3b3-4c80-884c-6ae24361955a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.426813 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.427969 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.432175 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-mhc2z" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.432714 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.433948 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.444606 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-rlp8c" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.459135 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.480073 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.491218 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.492444 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.495836 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-6zh9l" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.521307 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.528829 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mr4j\" (UniqueName: \"kubernetes.io/projected/e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe-kube-api-access-4mr4j\") pod \"glance-operator-controller-manager-5697bb5779-hrlb9\" (UID: \"e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.528880 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fb5m\" (UniqueName: \"kubernetes.io/projected/dda05715-875a-41d4-9ee6-c81406a965a9-kube-api-access-6fb5m\") pod \"designate-operator-controller-manager-697fb699cf-wr5zg\" (UID: \"dda05715-875a-41d4-9ee6-c81406a965a9\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.528941 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqv24\" (UniqueName: \"kubernetes.io/projected/0fcb859f-b723-4629-902c-68696b4b8995-kube-api-access-sqv24\") pod \"heat-operator-controller-manager-5f64f6f8bb-zsdxn\" (UID: \"0fcb859f-b723-4629-902c-68696b4b8995\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.528996 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5djfs\" (UniqueName: \"kubernetes.io/projected/b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c-kube-api-access-5djfs\") pod \"cinder-operator-controller-manager-6c677c69b-l9m5c\" (UID: \"b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.529049 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sb5pv\" (UniqueName: \"kubernetes.io/projected/022626c4-d3b3-4c80-884c-6ae24361955a-kube-api-access-sb5pv\") pod \"barbican-operator-controller-manager-7d9dfd778-pnmbz\" (UID: \"022626c4-d3b3-4c80-884c-6ae24361955a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.529377 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.551364 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.552339 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.558793 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.559034 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-89p9m" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.559419 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.560553 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.564253 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-2fhjj" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.585161 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5djfs\" (UniqueName: \"kubernetes.io/projected/b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c-kube-api-access-5djfs\") pod \"cinder-operator-controller-manager-6c677c69b-l9m5c\" (UID: \"b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c\") " pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.602526 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sb5pv\" (UniqueName: \"kubernetes.io/projected/022626c4-d3b3-4c80-884c-6ae24361955a-kube-api-access-sb5pv\") pod \"barbican-operator-controller-manager-7d9dfd778-pnmbz\" (UID: \"022626c4-d3b3-4c80-884c-6ae24361955a\") " pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.602607 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-85425"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.603819 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.606949 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-kjkr5" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.624915 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.630364 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mr4j\" (UniqueName: \"kubernetes.io/projected/e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe-kube-api-access-4mr4j\") pod \"glance-operator-controller-manager-5697bb5779-hrlb9\" (UID: \"e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.630421 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fb5m\" (UniqueName: \"kubernetes.io/projected/dda05715-875a-41d4-9ee6-c81406a965a9-kube-api-access-6fb5m\") pod \"designate-operator-controller-manager-697fb699cf-wr5zg\" (UID: \"dda05715-875a-41d4-9ee6-c81406a965a9\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.630478 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b47w8\" (UniqueName: \"kubernetes.io/projected/3882e308-ba7b-48c8-94f0-354b3926c925-kube-api-access-b47w8\") pod \"horizon-operator-controller-manager-68c6d99b8f-4kwdx\" (UID: \"3882e308-ba7b-48c8-94f0-354b3926c925\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.630514 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqv24\" (UniqueName: \"kubernetes.io/projected/0fcb859f-b723-4629-902c-68696b4b8995-kube-api-access-sqv24\") pod \"heat-operator-controller-manager-5f64f6f8bb-zsdxn\" (UID: \"0fcb859f-b723-4629-902c-68696b4b8995\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.630559 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.630601 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhh48\" (UniqueName: \"kubernetes.io/projected/5a541aca-2d5d-432d-a375-d639af4927ee-kube-api-access-hhh48\") pod \"ironic-operator-controller-manager-967d97867-85425\" (UID: \"5a541aca-2d5d-432d-a375-d639af4927ee\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.630634 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r4jw\" (UniqueName: 
\"kubernetes.io/projected/a51352cf-c6f2-40cc-9b72-035737c28e0e-kube-api-access-7r4jw\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.642109 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.667141 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.670103 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-85425"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.682438 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.682435 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqv24\" (UniqueName: \"kubernetes.io/projected/0fcb859f-b723-4629-902c-68696b4b8995-kube-api-access-sqv24\") pod \"heat-operator-controller-manager-5f64f6f8bb-zsdxn\" (UID: \"0fcb859f-b723-4629-902c-68696b4b8995\") " pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.688344 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.689423 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.705102 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-lbjlb" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.705819 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mr4j\" (UniqueName: \"kubernetes.io/projected/e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe-kube-api-access-4mr4j\") pod \"glance-operator-controller-manager-5697bb5779-hrlb9\" (UID: \"e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe\") " pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.706675 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fb5m\" (UniqueName: \"kubernetes.io/projected/dda05715-875a-41d4-9ee6-c81406a965a9-kube-api-access-6fb5m\") pod \"designate-operator-controller-manager-697fb699cf-wr5zg\" (UID: \"dda05715-875a-41d4-9ee6-c81406a965a9\") " pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.731897 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhh48\" (UniqueName: \"kubernetes.io/projected/5a541aca-2d5d-432d-a375-d639af4927ee-kube-api-access-hhh48\") pod \"ironic-operator-controller-manager-967d97867-85425\" (UID: \"5a541aca-2d5d-432d-a375-d639af4927ee\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.731955 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r4jw\" (UniqueName: \"kubernetes.io/projected/a51352cf-c6f2-40cc-9b72-035737c28e0e-kube-api-access-7r4jw\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.732028 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b47w8\" (UniqueName: \"kubernetes.io/projected/3882e308-ba7b-48c8-94f0-354b3926c925-kube-api-access-b47w8\") pod \"horizon-operator-controller-manager-68c6d99b8f-4kwdx\" (UID: \"3882e308-ba7b-48c8-94f0-354b3926c925\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.732082 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:27:52 crc kubenswrapper[4662]: E1208 09:27:52.732220 4662 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 08 09:27:52 crc kubenswrapper[4662]: E1208 09:27:52.732281 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert podName:a51352cf-c6f2-40cc-9b72-035737c28e0e nodeName:}" failed. 
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.751818 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f"]
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.764243 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhh48\" (UniqueName: \"kubernetes.io/projected/5a541aca-2d5d-432d-a375-d639af4927ee-kube-api-access-hhh48\") pod \"ironic-operator-controller-manager-967d97867-85425\" (UID: \"5a541aca-2d5d-432d-a375-d639af4927ee\") " pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425"
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.773138 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg"
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.773703 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9"
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.801994 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9"]
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.803300 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9"
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.808844 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-v62kd"
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.818991 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r4jw\" (UniqueName: \"kubernetes.io/projected/a51352cf-c6f2-40cc-9b72-035737c28e0e-kube-api-access-7r4jw\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt"
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.823848 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b47w8\" (UniqueName: \"kubernetes.io/projected/3882e308-ba7b-48c8-94f0-354b3926c925-kube-api-access-b47w8\") pod \"horizon-operator-controller-manager-68c6d99b8f-4kwdx\" (UID: \"3882e308-ba7b-48c8-94f0-354b3926c925\") " pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx"
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.828935 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55"]
Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.830000 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55"
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.831904 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-tsbns" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.834126 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67jnm\" (UniqueName: \"kubernetes.io/projected/41089106-3be5-42f3-9c98-76ddb4e0a32c-kube-api-access-67jnm\") pod \"manila-operator-controller-manager-5b5fd79c9c-jgbb9\" (UID: \"41089106-3be5-42f3-9c98-76ddb4e0a32c\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.834211 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2g6g\" (UniqueName: \"kubernetes.io/projected/2c718d76-ffa3-479f-972d-451437ca9b8e-kube-api-access-s2g6g\") pod \"mariadb-operator-controller-manager-79c8c4686c-hhz55\" (UID: \"2c718d76-ffa3-479f-972d-451437ca9b8e\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.834297 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t4cc\" (UniqueName: \"kubernetes.io/projected/60c3c9e4-042b-445f-96fe-7d4583ae29ee-kube-api-access-4t4cc\") pod \"keystone-operator-controller-manager-7765d96ddf-8zz9f\" (UID: \"60c3c9e4-042b-445f-96fe-7d4583ae29ee\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.836332 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.859484 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.860495 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.861829 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-nd8p6" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.876897 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.884800 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.934848 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.935867 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.937202 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t4cc\" (UniqueName: \"kubernetes.io/projected/60c3c9e4-042b-445f-96fe-7d4583ae29ee-kube-api-access-4t4cc\") pod \"keystone-operator-controller-manager-7765d96ddf-8zz9f\" (UID: \"60c3c9e4-042b-445f-96fe-7d4583ae29ee\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.937433 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67jnm\" (UniqueName: \"kubernetes.io/projected/41089106-3be5-42f3-9c98-76ddb4e0a32c-kube-api-access-67jnm\") pod \"manila-operator-controller-manager-5b5fd79c9c-jgbb9\" (UID: \"41089106-3be5-42f3-9c98-76ddb4e0a32c\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.937464 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2g6g\" (UniqueName: \"kubernetes.io/projected/2c718d76-ffa3-479f-972d-451437ca9b8e-kube-api-access-s2g6g\") pod \"mariadb-operator-controller-manager-79c8c4686c-hhz55\" (UID: \"2c718d76-ffa3-479f-972d-451437ca9b8e\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.942445 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.949592 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-vsh2b" Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.957226 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9"] Dec 08 09:27:52 crc kubenswrapper[4662]: I1208 09:27:52.961456 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.007358 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67jnm\" (UniqueName: \"kubernetes.io/projected/41089106-3be5-42f3-9c98-76ddb4e0a32c-kube-api-access-67jnm\") pod \"manila-operator-controller-manager-5b5fd79c9c-jgbb9\" (UID: \"41089106-3be5-42f3-9c98-76ddb4e0a32c\") " pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.244601 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t4cc\" (UniqueName: \"kubernetes.io/projected/60c3c9e4-042b-445f-96fe-7d4583ae29ee-kube-api-access-4t4cc\") pod \"keystone-operator-controller-manager-7765d96ddf-8zz9f\" (UID: \"60c3c9e4-042b-445f-96fe-7d4583ae29ee\") " pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.244901 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2g6g\" (UniqueName: \"kubernetes.io/projected/2c718d76-ffa3-479f-972d-451437ca9b8e-kube-api-access-s2g6g\") pod \"mariadb-operator-controller-manager-79c8c4686c-hhz55\" (UID: \"2c718d76-ffa3-479f-972d-451437ca9b8e\") " pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.283150 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp5d6\" (UniqueName: \"kubernetes.io/projected/00caf547-24eb-4e92-9294-900fbf53f068-kube-api-access-bp5d6\") pod \"nova-operator-controller-manager-697bc559fc-qtdrq\" (UID: \"00caf547-24eb-4e92-9294-900fbf53f068\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.283267 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6tqb\" (UniqueName: \"kubernetes.io/projected/edef4f76-66e3-4431-8f3e-15b1be7dc525-kube-api-access-g6tqb\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-k7nhg\" (UID: \"edef4f76-66e3-4431-8f3e-15b1be7dc525\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.283311 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:27:53 crc kubenswrapper[4662]: E1208 09:27:53.283427 4662 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 08 09:27:53 crc kubenswrapper[4662]: E1208 09:27:53.283471 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert podName:a51352cf-c6f2-40cc-9b72-035737c28e0e nodeName:}" failed. No retries permitted until 2025-12-08 09:27:54.283458192 +0000 UTC m=+797.852486182 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert") pod "infra-operator-controller-manager-78d48bff9d-wgcnt" (UID: "a51352cf-c6f2-40cc-9b72-035737c28e0e") : secret "infra-operator-webhook-server-cert" not found Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.283913 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.286084 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.333154 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.335709 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.344081 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.385907 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6tqb\" (UniqueName: \"kubernetes.io/projected/edef4f76-66e3-4431-8f3e-15b1be7dc525-kube-api-access-g6tqb\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-k7nhg\" (UID: \"edef4f76-66e3-4431-8f3e-15b1be7dc525\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.385981 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp5d6\" (UniqueName: \"kubernetes.io/projected/00caf547-24eb-4e92-9294-900fbf53f068-kube-api-access-bp5d6\") pod \"nova-operator-controller-manager-697bc559fc-qtdrq\" (UID: \"00caf547-24eb-4e92-9294-900fbf53f068\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.388127 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-xpnt6" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.388445 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.407886 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.409279 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.411384 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6kfrn" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.414812 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp5d6\" (UniqueName: \"kubernetes.io/projected/00caf547-24eb-4e92-9294-900fbf53f068-kube-api-access-bp5d6\") pod \"nova-operator-controller-manager-697bc559fc-qtdrq\" (UID: \"00caf547-24eb-4e92-9294-900fbf53f068\") " pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.420643 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.424544 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6tqb\" (UniqueName: \"kubernetes.io/projected/edef4f76-66e3-4431-8f3e-15b1be7dc525-kube-api-access-g6tqb\") pod \"neutron-operator-controller-manager-5fdfd5b6b5-k7nhg\" (UID: \"edef4f76-66e3-4431-8f3e-15b1be7dc525\") " pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.426305 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.427534 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-djwxx" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.486684 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgtmr\" (UniqueName: \"kubernetes.io/projected/99cf0df3-a30d-4a1b-aa55-d5a814afd119-kube-api-access-xgtmr\") pod \"octavia-operator-controller-manager-998648c74-9b5bw\" (UID: \"99cf0df3-a30d-4a1b-aa55-d5a814afd119\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.506796 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.510051 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.522134 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-tcdl5" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.522224 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.522560 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.536438 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.559393 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.591068 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.591371 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6wfj\" (UniqueName: \"kubernetes.io/projected/92ae4959-7652-4be1-9349-d1a1fbb32d68-kube-api-access-q6wfj\") pod \"placement-operator-controller-manager-78f8948974-v5zsp\" (UID: \"92ae4959-7652-4be1-9349-d1a1fbb32d68\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.591432 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgtmr\" (UniqueName: \"kubernetes.io/projected/99cf0df3-a30d-4a1b-aa55-d5a814afd119-kube-api-access-xgtmr\") pod \"octavia-operator-controller-manager-998648c74-9b5bw\" (UID: \"99cf0df3-a30d-4a1b-aa55-d5a814afd119\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.591546 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh7bv\" (UniqueName: \"kubernetes.io/projected/08f788cf-75bb-4beb-bb4e-f9fd39c18972-kube-api-access-mh7bv\") pod \"ovn-operator-controller-manager-b6456fdb6-8hlzg\" (UID: \"08f788cf-75bb-4beb-bb4e-f9fd39c18972\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.610822 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.643964 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.650957 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgtmr\" (UniqueName: \"kubernetes.io/projected/99cf0df3-a30d-4a1b-aa55-d5a814afd119-kube-api-access-xgtmr\") pod \"octavia-operator-controller-manager-998648c74-9b5bw\" (UID: \"99cf0df3-a30d-4a1b-aa55-d5a814afd119\") " pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.689235 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.692569 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q9fq\" (UniqueName: \"kubernetes.io/projected/078c0bad-4b25-4cdf-8d11-abfa0430137c-kube-api-access-5q9fq\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: 
\"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.692613 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh7bv\" (UniqueName: \"kubernetes.io/projected/08f788cf-75bb-4beb-bb4e-f9fd39c18972-kube-api-access-mh7bv\") pod \"ovn-operator-controller-manager-b6456fdb6-8hlzg\" (UID: \"08f788cf-75bb-4beb-bb4e-f9fd39c18972\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.692635 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: \"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.692667 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q6wfj\" (UniqueName: \"kubernetes.io/projected/92ae4959-7652-4be1-9349-d1a1fbb32d68-kube-api-access-q6wfj\") pod \"placement-operator-controller-manager-78f8948974-v5zsp\" (UID: \"92ae4959-7652-4be1-9349-d1a1fbb32d68\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.695497 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.698510 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-kbtn8" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.705802 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-knpnl"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.707002 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.712781 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-jzv79" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.717881 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh7bv\" (UniqueName: \"kubernetes.io/projected/08f788cf-75bb-4beb-bb4e-f9fd39c18972-kube-api-access-mh7bv\") pod \"ovn-operator-controller-manager-b6456fdb6-8hlzg\" (UID: \"08f788cf-75bb-4beb-bb4e-f9fd39c18972\") " pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.727287 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q6wfj\" (UniqueName: \"kubernetes.io/projected/92ae4959-7652-4be1-9349-d1a1fbb32d68-kube-api-access-q6wfj\") pod \"placement-operator-controller-manager-78f8948974-v5zsp\" (UID: \"92ae4959-7652-4be1-9349-d1a1fbb32d68\") " pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.759815 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.760161 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.774278 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-knpnl"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.784019 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.787007 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.788112 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.789882 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-8wqw2" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.792863 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.793948 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.795301 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9kx5\" (UniqueName: \"kubernetes.io/projected/7a4111f6-9632-4997-a71b-a514f200b5cf-kube-api-access-s9kx5\") pod \"swift-operator-controller-manager-9d58d64bc-kh2rv\" (UID: \"7a4111f6-9632-4997-a71b-a514f200b5cf\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.795337 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kldsz\" (UniqueName: \"kubernetes.io/projected/bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2-kube-api-access-kldsz\") pod \"test-operator-controller-manager-5854674fcc-knpnl\" (UID: \"bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.795370 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xlv9\" (UniqueName: \"kubernetes.io/projected/606d0c5a-d4b6-46c3-9cd8-367895904823-kube-api-access-5xlv9\") pod \"telemetry-operator-controller-manager-58d5ff84df-jnfw7\" (UID: \"606d0c5a-d4b6-46c3-9cd8-367895904823\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.795394 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pmv7\" (UniqueName: \"kubernetes.io/projected/19e2da99-64f8-48f3-974b-5a33bdbe683d-kube-api-access-4pmv7\") pod \"watcher-operator-controller-manager-667bd8d554-6xkzt\" (UID: \"19e2da99-64f8-48f3-974b-5a33bdbe683d\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.795418 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q9fq\" (UniqueName: \"kubernetes.io/projected/078c0bad-4b25-4cdf-8d11-abfa0430137c-kube-api-access-5q9fq\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: \"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.795445 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: \"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:27:53 crc kubenswrapper[4662]: E1208 09:27:53.795564 4662 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 08 09:27:53 crc kubenswrapper[4662]: E1208 09:27:53.795601 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert podName:078c0bad-4b25-4cdf-8d11-abfa0430137c nodeName:}" failed. No retries permitted until 2025-12-08 09:27:54.295588439 +0000 UTC m=+797.864616429 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fsnfnd" (UID: "078c0bad-4b25-4cdf-8d11-abfa0430137c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.797976 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.798351 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-sxhzn" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.806694 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.824847 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.873850 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q9fq\" (UniqueName: \"kubernetes.io/projected/078c0bad-4b25-4cdf-8d11-abfa0430137c-kube-api-access-5q9fq\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: \"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.876810 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.877580 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.880802 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.883973 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.884136 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.884285 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-gpkm9" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.893947 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.900881 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9kx5\" (UniqueName: \"kubernetes.io/projected/7a4111f6-9632-4997-a71b-a514f200b5cf-kube-api-access-s9kx5\") pod \"swift-operator-controller-manager-9d58d64bc-kh2rv\" (UID: \"7a4111f6-9632-4997-a71b-a514f200b5cf\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.902100 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kldsz\" (UniqueName: \"kubernetes.io/projected/bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2-kube-api-access-kldsz\") pod \"test-operator-controller-manager-5854674fcc-knpnl\" (UID: \"bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.903157 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xlv9\" (UniqueName: \"kubernetes.io/projected/606d0c5a-d4b6-46c3-9cd8-367895904823-kube-api-access-5xlv9\") pod \"telemetry-operator-controller-manager-58d5ff84df-jnfw7\" (UID: \"606d0c5a-d4b6-46c3-9cd8-367895904823\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.903320 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pmv7\" (UniqueName: \"kubernetes.io/projected/19e2da99-64f8-48f3-974b-5a33bdbe683d-kube-api-access-4pmv7\") pod \"watcher-operator-controller-manager-667bd8d554-6xkzt\" (UID: \"19e2da99-64f8-48f3-974b-5a33bdbe683d\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.908776 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.910407 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.922246 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-k7mpm" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.926212 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.947933 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kldsz\" (UniqueName: \"kubernetes.io/projected/bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2-kube-api-access-kldsz\") pod \"test-operator-controller-manager-5854674fcc-knpnl\" (UID: \"bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2\") " pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.956878 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pmv7\" (UniqueName: \"kubernetes.io/projected/19e2da99-64f8-48f3-974b-5a33bdbe683d-kube-api-access-4pmv7\") pod \"watcher-operator-controller-manager-667bd8d554-6xkzt\" (UID: \"19e2da99-64f8-48f3-974b-5a33bdbe683d\") " pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.975944 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg"] Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.976352 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xlv9\" (UniqueName: \"kubernetes.io/projected/606d0c5a-d4b6-46c3-9cd8-367895904823-kube-api-access-5xlv9\") pod \"telemetry-operator-controller-manager-58d5ff84df-jnfw7\" (UID: \"606d0c5a-d4b6-46c3-9cd8-367895904823\") " pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.978568 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9kx5\" (UniqueName: \"kubernetes.io/projected/7a4111f6-9632-4997-a71b-a514f200b5cf-kube-api-access-s9kx5\") pod \"swift-operator-controller-manager-9d58d64bc-kh2rv\" (UID: \"7a4111f6-9632-4997-a71b-a514f200b5cf\") " pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.989976 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" event={"ID":"b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c","Type":"ContainerStarted","Data":"2d4d04c593990a0308c8b4433eb791344d6d0dc51714999a31d1c92413ee4b43"} Dec 08 09:27:53 crc kubenswrapper[4662]: I1208 09:27:53.992812 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" event={"ID":"dda05715-875a-41d4-9ee6-c81406a965a9","Type":"ContainerStarted","Data":"d21e78c5f1ea5d0daaa98ce3452e3b68690bf7daedecd46241691165121bc7d7"} Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.011718 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: 
\"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.035448 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmx5z\" (UniqueName: \"kubernetes.io/projected/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-kube-api-access-vmx5z\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.035507 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.130098 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.136308 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rkvp\" (UniqueName: \"kubernetes.io/projected/4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8-kube-api-access-2rkvp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v66fn\" (UID: \"4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.136355 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmx5z\" (UniqueName: \"kubernetes.io/projected/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-kube-api-access-vmx5z\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.136377 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.136416 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.136524 4662 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.136568 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" 
failed. No retries permitted until 2025-12-08 09:27:54.636552325 +0000 UTC m=+798.205580315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "metrics-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.138042 4662 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.138077 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:27:54.638068236 +0000 UTC m=+798.207096226 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "webhook-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.150406 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.153052 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmx5z\" (UniqueName: \"kubernetes.io/projected/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-kube-api-access-vmx5z\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.174548 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.205467 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.238836 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz"] Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.240069 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rkvp\" (UniqueName: \"kubernetes.io/projected/4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8-kube-api-access-2rkvp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v66fn\" (UID: \"4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.280450 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rkvp\" (UniqueName: \"kubernetes.io/projected/4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8-kube-api-access-2rkvp\") pod \"rabbitmq-cluster-operator-manager-668c99d594-v66fn\" (UID: \"4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn" Dec 08 09:27:54 crc kubenswrapper[4662]: W1208 09:27:54.282008 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod022626c4_d3b3_4c80_884c_6ae24361955a.slice/crio-e12632a1bbc75014e9845dde50aab5ab87741ec5e503f369a37fd00dbf61d262 WatchSource:0}: Error finding container e12632a1bbc75014e9845dde50aab5ab87741ec5e503f369a37fd00dbf61d262: Status 404 returned error can't find the container with id e12632a1bbc75014e9845dde50aab5ab87741ec5e503f369a37fd00dbf61d262 Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.342345 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: \"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.342504 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.342606 4662 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.342672 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert podName:078c0bad-4b25-4cdf-8d11-abfa0430137c nodeName:}" failed. No retries permitted until 2025-12-08 09:27:55.342654392 +0000 UTC m=+798.911682382 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fsnfnd" (UID: "078c0bad-4b25-4cdf-8d11-abfa0430137c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.342711 4662 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.342779 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert podName:a51352cf-c6f2-40cc-9b72-035737c28e0e nodeName:}" failed. No retries permitted until 2025-12-08 09:27:56.342761955 +0000 UTC m=+799.911790005 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert") pod "infra-operator-controller-manager-78d48bff9d-wgcnt" (UID: "a51352cf-c6f2-40cc-9b72-035737c28e0e") : secret "infra-operator-webhook-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.558402 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.615543 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn"] Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.642489 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9"] Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.653339 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.653393 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.653527 4662 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.653571 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:27:55.653558363 +0000 UTC m=+799.222586353 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "metrics-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.653609 4662 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.653626 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:27:55.653621295 +0000 UTC m=+799.222649285 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "webhook-server-cert" not found Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.667729 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9"] Dec 08 09:27:54 crc kubenswrapper[4662]: W1208 09:27:54.727698 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0fcb859f_b723_4629_902c_68696b4b8995.slice/crio-c4856f38a227028ce5157a7f8d3ff2f1df3d3b045c5ab2a68a0051224d76b025 WatchSource:0}: Error finding container c4856f38a227028ce5157a7f8d3ff2f1df3d3b045c5ab2a68a0051224d76b025: Status 404 returned error can't find the container with id c4856f38a227028ce5157a7f8d3ff2f1df3d3b045c5ab2a68a0051224d76b025 Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.728171 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-967d97867-85425"] Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.728212 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq"] Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.728227 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55"] Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.745062 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx"] Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.767788 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f"] Dec 08 09:27:54 crc kubenswrapper[4662]: W1208 09:27:54.790609 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2c718d76_ffa3_479f_972d_451437ca9b8e.slice/crio-6fb162a72daa00767b575d2487ad88532a9b1871163306b05488ba07da9858a2 WatchSource:0}: Error finding container 6fb162a72daa00767b575d2487ad88532a9b1871163306b05488ba07da9858a2: Status 404 returned error can't find the container with id 6fb162a72daa00767b575d2487ad88532a9b1871163306b05488ba07da9858a2 Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.796211 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg"] Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.839429 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw"] Dec 08 09:27:54 crc kubenswrapper[4662]: W1208 09:27:54.844037 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedef4f76_66e3_4431_8f3e_15b1be7dc525.slice/crio-1e7d40cfbb34e5e90835a372645a24f23c70a22f0f2b2e2d864cfbde2428e12a WatchSource:0}: Error finding container 1e7d40cfbb34e5e90835a372645a24f23c70a22f0f2b2e2d864cfbde2428e12a: Status 404 returned error can't find the container with id 1e7d40cfbb34e5e90835a372645a24f23c70a22f0f2b2e2d864cfbde2428e12a Dec 08 09:27:54 crc kubenswrapper[4662]: W1208 09:27:54.845136 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08f788cf_75bb_4beb_bb4e_f9fd39c18972.slice/crio-4d5a8265d490acaeabb26b0b774255f74ddc04a48d6fc029054d5295d79744fa WatchSource:0}: Error finding container 4d5a8265d490acaeabb26b0b774255f74ddc04a48d6fc029054d5295d79744fa: Status 404 returned error can't find the container with id 4d5a8265d490acaeabb26b0b774255f74ddc04a48d6fc029054d5295d79744fa Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.850470 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg"] Dec 08 09:27:54 crc kubenswrapper[4662]: I1208 09:27:54.856280 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp"] Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.859818 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g6tqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.863494 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g6tqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-k7nhg_openstack-operators(edef4f76-66e3-4431-8f3e-15b1be7dc525): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.865301 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" podUID="edef4f76-66e3-4431-8f3e-15b1be7dc525"
Dec 08 09:27:54 crc kubenswrapper[4662]: W1208 09:27:54.866858 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92ae4959_7652_4be1_9349_d1a1fbb32d68.slice/crio-3d19c8d1ddaab4d98efd96f404b0f68a52567eacd965b1dbe071a5671498fd03 WatchSource:0}: Error finding container 3d19c8d1ddaab4d98efd96f404b0f68a52567eacd965b1dbe071a5671498fd03: Status 404 returned error can't find the container with id 3d19c8d1ddaab4d98efd96f404b0f68a52567eacd965b1dbe071a5671498fd03
Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.879650 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6wfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-v5zsp_openstack-operators(92ae4959-7652-4be1-9349-d1a1fbb32d68): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.881716 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6wfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-v5zsp_openstack-operators(92ae4959-7652-4be1-9349-d1a1fbb32d68): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 08 09:27:54 crc kubenswrapper[4662]: E1208 09:27:54.882897 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" podUID="92ae4959-7652-4be1-9349-d1a1fbb32d68"
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.011599 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" event={"ID":"edef4f76-66e3-4431-8f3e-15b1be7dc525","Type":"ContainerStarted","Data":"1e7d40cfbb34e5e90835a372645a24f23c70a22f0f2b2e2d864cfbde2428e12a"}
Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.015152 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" podUID="edef4f76-66e3-4431-8f3e-15b1be7dc525"
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.017364 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" event={"ID":"60c3c9e4-042b-445f-96fe-7d4583ae29ee","Type":"ContainerStarted","Data":"55a3987a24a312f294a86ba233bf7bdcf8eb70543a717ff4cbd6e1f90c3cf73d"}
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.018761 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" event={"ID":"92ae4959-7652-4be1-9349-d1a1fbb32d68","Type":"ContainerStarted","Data":"3d19c8d1ddaab4d98efd96f404b0f68a52567eacd965b1dbe071a5671498fd03"}
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.022510 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" event={"ID":"99cf0df3-a30d-4a1b-aa55-d5a814afd119","Type":"ContainerStarted","Data":"dee3ee5cfaf659d18f3a24f946fcf5be51504a1089d2a4f8da52f78680bb2d3b"}
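[annotation] The ErrImagePull reason above, "pull QPS exceeded", is not a registry-side failure: it is the kubelet's own client-side rate limit on image pulls (the registryPullQPS/registryBurst kubelet settings, which default to 5 QPS with a burst of 10). With this many operator pods starting at once, the burst is exhausted and the remaining pulls fail immediately, after which the pods fall into ImagePullBackOff until the limiter refills. A minimal sketch of the same token-bucket idea using golang.org/x/time/rate; the values below are the upstream defaults, not read from this node:

// pull_qps_sketch.go — hedged illustration of the token-bucket limit behind "pull QPS exceeded"
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	// 5 pulls/second sustained, bursts of up to 10 (KubeletConfiguration defaults).
	limiter := rate.NewLimiter(rate.Limit(5), 10)
	// Simulate ~30 operator pods requesting image pulls at the same instant:
	// the first 10 consume the burst, the rest are rejected without waiting,
	// which the kubelet surfaces as ErrImagePull: "pull QPS exceeded".
	for pull := 1; pull <= 30; pull++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: allowed\n", pull)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded\n", pull)
		}
	}
}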
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.023387 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" event={"ID":"022626c4-d3b3-4c80-884c-6ae24361955a","Type":"ContainerStarted","Data":"e12632a1bbc75014e9845dde50aab5ab87741ec5e503f369a37fd00dbf61d262"}
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.024362 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" event={"ID":"08f788cf-75bb-4beb-bb4e-f9fd39c18972","Type":"ContainerStarted","Data":"4d5a8265d490acaeabb26b0b774255f74ddc04a48d6fc029054d5295d79744fa"}
Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.025124 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" podUID="92ae4959-7652-4be1-9349-d1a1fbb32d68"
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.025570 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" event={"ID":"e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe","Type":"ContainerStarted","Data":"fff91c89b646729138d0e4543bd6c5adbad17026289b2b7fc1d335379cafb9ce"}
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.026359 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" event={"ID":"00caf547-24eb-4e92-9294-900fbf53f068","Type":"ContainerStarted","Data":"66afed826e822d4af95b434eee4696d146c95a0156bbe9a17d46eb1bc1072431"}
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.027054 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" event={"ID":"5a541aca-2d5d-432d-a375-d639af4927ee","Type":"ContainerStarted","Data":"99be75079f2f7dca7f31e4aa2750fd8d8c3dab2d4e43215b026f9b95ab1ab4c7"}
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.028151 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" event={"ID":"2c718d76-ffa3-479f-972d-451437ca9b8e","Type":"ContainerStarted","Data":"6fb162a72daa00767b575d2487ad88532a9b1871163306b05488ba07da9858a2"}
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.028960 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" event={"ID":"0fcb859f-b723-4629-902c-68696b4b8995","Type":"ContainerStarted","Data":"c4856f38a227028ce5157a7f8d3ff2f1df3d3b045c5ab2a68a0051224d76b025"}
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.029572 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" event={"ID":"41089106-3be5-42f3-9c98-76ddb4e0a32c","Type":"ContainerStarted","Data":"e214727b05c6602143872a101b6608e91c1c7dfc76443853e11edf0dd14febc0"}
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.029639 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv"]
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.030204 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" event={"ID":"3882e308-ba7b-48c8-94f0-354b3926c925","Type":"ContainerStarted","Data":"28d56692b48fb1ef1e08738ffada8d7feb89fdde9392cd5e7fde43b6118b8670"}
Dec 08 09:27:55 crc kubenswrapper[4662]: W1208 09:27:55.076143 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a4111f6_9632_4997_a71b_a514f200b5cf.slice/crio-215f575a4716520f927efa66c180bcab23463eec268c688507c0c3b64e2734c5 WatchSource:0}: Error finding container 215f575a4716520f927efa66c180bcab23463eec268c688507c0c3b64e2734c5: Status 404 returned error can't find the container with id 215f575a4716520f927efa66c180bcab23463eec268c688507c0c3b64e2734c5
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.083506 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-5854674fcc-knpnl"]
Dec 08 09:27:55 crc kubenswrapper[4662]: W1208 09:27:55.115552 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf88a9c5_bcb3_4be6_be18_8418e7ac3fc2.slice/crio-f587bc6cf7008021073949842de0e0f3fb3fc362d407d6227071549f7279541b WatchSource:0}: Error finding container f587bc6cf7008021073949842de0e0f3fb3fc362d407d6227071549f7279541b: Status 404 returned error can't find the container with id f587bc6cf7008021073949842de0e0f3fb3fc362d407d6227071549f7279541b
Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.120108 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kldsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-knpnl_openstack-operators(bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.122044 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kldsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-knpnl_openstack-operators(bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.123474 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" podUID="bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2"
Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.123522 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt"]
Dec 08 09:27:55 crc kubenswrapper[4662]: W1208 09:27:55.124122 4662 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19e2da99_64f8_48f3_974b_5a33bdbe683d.slice/crio-887312d8ed4974b5090246180ba544857a298aac30607823fca46594f6174c72 WatchSource:0}: Error finding container 887312d8ed4974b5090246180ba544857a298aac30607823fca46594f6174c72: Status 404 returned error can't find the container with id 887312d8ed4974b5090246180ba544857a298aac30607823fca46594f6174c72 Dec 08 09:27:55 crc kubenswrapper[4662]: W1208 09:27:55.127427 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod606d0c5a_d4b6_46c3_9cd8_367895904823.slice/crio-45bb4055d055880a14a9f23085f6b009a970b75b99656721d9a1db94093fdf35 WatchSource:0}: Error finding container 45bb4055d055880a14a9f23085f6b009a970b75b99656721d9a1db94093fdf35: Status 404 returned error can't find the container with id 45bb4055d055880a14a9f23085f6b009a970b75b99656721d9a1db94093fdf35 Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.129378 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4pmv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-6xkzt_openstack-operators(19e2da99-64f8-48f3-974b-5a33bdbe683d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 
09:27:55.130003 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7"] Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.130466 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xlv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-jnfw7_openstack-operators(606d0c5a-d4b6-46c3-9cd8-367895904823): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.132929 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5xlv9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-58d5ff84df-jnfw7_openstack-operators(606d0c5a-d4b6-46c3-9cd8-367895904823): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.133228 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0,Command:[],Args:[--secure-listen-address=0.0.0.0:8443 --upstream=http://127.0.0.1:8080/ --logtostderr=true --v=0],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{134217728 0} {} BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4pmv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-6xkzt_openstack-operators(19e2da99-64f8-48f3-974b-5a33bdbe683d): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.134053 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" podUID="606d0c5a-d4b6-46c3-9cd8-367895904823" Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.134421 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ErrImagePull: \"pull QPS exceeded\"]" 
pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" podUID="19e2da99-64f8-48f3-974b-5a33bdbe683d" Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.216085 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn"] Dec 08 09:27:55 crc kubenswrapper[4662]: W1208 09:27:55.229008 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4ea8fed4_0dca_430b_bb22_8a7e8fdee0b8.slice/crio-6b7f8e353a24d782092e4bd894b36470bb5c6028b772b6bc4b845003e1389d5b WatchSource:0}: Error finding container 6b7f8e353a24d782092e4bd894b36470bb5c6028b772b6bc4b845003e1389d5b: Status 404 returned error can't find the container with id 6b7f8e353a24d782092e4bd894b36470bb5c6028b772b6bc4b845003e1389d5b Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.375888 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: \"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.376048 4662 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.376106 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert podName:078c0bad-4b25-4cdf-8d11-abfa0430137c nodeName:}" failed. No retries permitted until 2025-12-08 09:27:57.376091034 +0000 UTC m=+800.945119014 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fsnfnd" (UID: "078c0bad-4b25-4cdf-8d11-abfa0430137c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.679655 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:55 crc kubenswrapper[4662]: I1208 09:27:55.679761 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.679871 4662 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.679935 4662 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.679955 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:27:57.679934535 +0000 UTC m=+801.248962515 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "webhook-server-cert" not found Dec 08 09:27:55 crc kubenswrapper[4662]: E1208 09:27:55.680004 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:27:57.679985317 +0000 UTC m=+801.249013357 (durationBeforeRetry 2s). 
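
Interleaved with the pull errors, secret.go:188 records a different and self-healing condition: these pods were scheduled before the Secrets backing their cert volumes existed, so every MountVolume.SetUp attempt fails with "secret ... not found" and is requeued. The volume plugin itself is fine; the mount succeeds on the first retry after the Secret appears (which happens further down for the infra-operator and baremetal-operator certs). A quick way to watch the blocking object, sketched with client-go (the kubeconfig location is an assumption):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        // Namespace and name are taken from the log records above.
        _, err = cs.CoreV1().Secrets("openstack-operators").
            Get(context.TODO(), "webhook-server-cert", metav1.GetOptions{})
        if err != nil {
            fmt.Println("still missing:", err) // same condition secret.go:188 reports
            return
        }
        fmt.Println("secret exists; the next mount retry should succeed")
    }

The Secrets here are presumably created asynchronously by the operators' own webhook/cert machinery, so the retries are benign as long as they eventually stop.
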
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "metrics-server-cert" not found Dec 08 09:27:56 crc kubenswrapper[4662]: I1208 09:27:56.062602 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn" event={"ID":"4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8","Type":"ContainerStarted","Data":"6b7f8e353a24d782092e4bd894b36470bb5c6028b772b6bc4b845003e1389d5b"} Dec 08 09:27:56 crc kubenswrapper[4662]: I1208 09:27:56.067722 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" event={"ID":"7a4111f6-9632-4997-a71b-a514f200b5cf","Type":"ContainerStarted","Data":"215f575a4716520f927efa66c180bcab23463eec268c688507c0c3b64e2734c5"} Dec 08 09:27:56 crc kubenswrapper[4662]: I1208 09:27:56.069541 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" event={"ID":"bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2","Type":"ContainerStarted","Data":"f587bc6cf7008021073949842de0e0f3fb3fc362d407d6227071549f7279541b"} Dec 08 09:27:56 crc kubenswrapper[4662]: I1208 09:27:56.072040 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" event={"ID":"19e2da99-64f8-48f3-974b-5a33bdbe683d","Type":"ContainerStarted","Data":"887312d8ed4974b5090246180ba544857a298aac30607823fca46594f6174c72"} Dec 08 09:27:56 crc kubenswrapper[4662]: E1208 09:27:56.072345 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" podUID="bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2" Dec 08 09:27:56 crc kubenswrapper[4662]: E1208 09:27:56.074946 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" podUID="19e2da99-64f8-48f3-974b-5a33bdbe683d" Dec 08 09:27:56 crc kubenswrapper[4662]: I1208 09:27:56.076582 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" event={"ID":"606d0c5a-d4b6-46c3-9cd8-367895904823","Type":"ContainerStarted","Data":"45bb4055d055880a14a9f23085f6b009a970b75b99656721d9a1db94093fdf35"} Dec 08 09:27:56 crc kubenswrapper[4662]: E1208 09:27:56.081803 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" podUID="edef4f76-66e3-4431-8f3e-15b1be7dc525" Dec 08 09:27:56 crc kubenswrapper[4662]: E1208 09:27:56.082059 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" podUID="92ae4959-7652-4be1-9349-d1a1fbb32d68" Dec 08 09:27:56 crc kubenswrapper[4662]: E1208 09:27:56.091485 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" podUID="606d0c5a-d4b6-46c3-9cd8-367895904823" Dec 08 09:27:56 crc kubenswrapper[4662]: I1208 09:27:56.395221 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:27:56 crc kubenswrapper[4662]: E1208 09:27:56.395474 4662 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 08 09:27:56 crc kubenswrapper[4662]: E1208 09:27:56.395573 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert podName:a51352cf-c6f2-40cc-9b72-035737c28e0e nodeName:}" failed. No retries permitted until 2025-12-08 09:28:00.395550669 +0000 UTC m=+803.964578719 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert") pod "infra-operator-controller-manager-78d48bff9d-wgcnt" (UID: "a51352cf-c6f2-40cc-9b72-035737c28e0e") : secret "infra-operator-webhook-server-cert" not found Dec 08 09:27:57 crc kubenswrapper[4662]: E1208 09:27:57.096445 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f27e732ec1faee765461bf137d9be81278b2fa39675019a73622755e1e610b6f\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" podUID="606d0c5a-d4b6-46c3-9cd8-367895904823" Dec 08 09:27:57 crc kubenswrapper[4662]: E1208 09:27:57.097299 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" podUID="bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2" Dec 08 09:27:57 crc kubenswrapper[4662]: E1208 09:27:57.098342 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/kube-rbac-proxy:v0.16.0\\\"\"]" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" podUID="19e2da99-64f8-48f3-974b-5a33bdbe683d" Dec 08 09:27:57 crc kubenswrapper[4662]: I1208 09:27:57.414021 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: \"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:27:57 crc kubenswrapper[4662]: E1208 09:27:57.414294 4662 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 08 09:27:57 crc kubenswrapper[4662]: E1208 09:27:57.414343 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert podName:078c0bad-4b25-4cdf-8d11-abfa0430137c nodeName:}" failed. No retries permitted until 2025-12-08 09:28:01.414328075 +0000 UTC m=+804.983356065 (durationBeforeRetry 4s). 
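
A note on reading the &Container{...} dumps that kuberuntime_manager.go prints with each of these failures: the ResourceList values appear in resource.Quantity's internal form, where {{500 -3} {} 500m DecimalSI} means the scaled integer 500 * 10^-3 cores (500m, half a CPU) and {{536870912 0} {} BinarySI} means 536870912 * 10^0 bytes (512Mi). The same values round-trip through apimachinery, as this small sketch shows:

    package main

    import (
        "fmt"

        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        cpu := resource.MustParse("500m")  // dumped as {{500 -3} {} 500m DecimalSI}
        mem := resource.MustParse("512Mi") // dumped as {{536870912 0} {} BinarySI}

        fmt.Println(cpu.MilliValue()) // 500 millicores = 0.5 CPU
        fmt.Println(mem.Value())      // 536870912 bytes
    }

So the operator manager containers in this log request 10m CPU / 256Mi and are limited to 500m CPU / 512Mi.
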
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fsnfnd" (UID: "078c0bad-4b25-4cdf-8d11-abfa0430137c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 08 09:27:57 crc kubenswrapper[4662]: I1208 09:27:57.726624 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:57 crc kubenswrapper[4662]: I1208 09:27:57.726732 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:27:57 crc kubenswrapper[4662]: E1208 09:27:57.726876 4662 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 08 09:27:57 crc kubenswrapper[4662]: E1208 09:27:57.726930 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:28:01.726914073 +0000 UTC m=+805.295942063 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "metrics-server-cert" not found Dec 08 09:27:57 crc kubenswrapper[4662]: E1208 09:27:57.728015 4662 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 08 09:27:57 crc kubenswrapper[4662]: E1208 09:27:57.728059 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:28:01.728039784 +0000 UTC m=+805.297067774 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "webhook-server-cert" not found Dec 08 09:28:00 crc kubenswrapper[4662]: I1208 09:28:00.464020 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:28:00 crc kubenswrapper[4662]: E1208 09:28:00.464155 4662 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Dec 08 09:28:00 crc kubenswrapper[4662]: E1208 09:28:00.464417 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert podName:a51352cf-c6f2-40cc-9b72-035737c28e0e nodeName:}" failed. No retries permitted until 2025-12-08 09:28:08.464401164 +0000 UTC m=+812.033429154 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert") pod "infra-operator-controller-manager-78d48bff9d-wgcnt" (UID: "a51352cf-c6f2-40cc-9b72-035737c28e0e") : secret "infra-operator-webhook-server-cert" not found Dec 08 09:28:01 crc kubenswrapper[4662]: I1208 09:28:01.515477 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: \"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:28:01 crc kubenswrapper[4662]: E1208 09:28:01.515798 4662 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 08 09:28:01 crc kubenswrapper[4662]: E1208 09:28:01.515873 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert podName:078c0bad-4b25-4cdf-8d11-abfa0430137c nodeName:}" failed. No retries permitted until 2025-12-08 09:28:09.515849874 +0000 UTC m=+813.084877874 (durationBeforeRetry 8s). 
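
The durationBeforeRetry values in the nestedpendingoperations records grow geometrically for each failing operation: 2s for the retries scheduled at 09:27:55, 4s at 09:27:56-57, 8s at 09:28:00-01, and 16s further down at 09:28:09. That is the volume manager's per-operation exponential back-off; a minimal sketch of the doubling-with-cap pattern (the constants are illustrative, not kubelet's exact ones):

    package main

    import (
        "fmt"
        "time"
    )

    // next doubles the wait after each consecutive failure of the same
    // operation, up to a ceiling, mimicking durationBeforeRetry.
    func next(last time.Duration) time.Duration {
        const initial = 2 * time.Second
        const maxWait = 2 * time.Minute
        if last == 0 {
            return initial
        }
        if d := 2 * last; d < maxWait {
            return d
        }
        return maxWait
    }

    func main() {
        var d time.Duration
        for i := 0; i < 5; i++ {
            d = next(d)
            fmt.Println(d) // 2s 4s 8s 16s 32s; the log shows the first four
        }
    }

The back-off is tracked per volume/pod operation, which is why the infra-operator, baremetal-operator, and openstack-operator cert mounts each climb the same 2s, 4s, 8s, 16s ladder independently.
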
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert") pod "openstack-baremetal-operator-controller-manager-84b575879fsnfnd" (UID: "078c0bad-4b25-4cdf-8d11-abfa0430137c") : secret "openstack-baremetal-operator-webhook-server-cert" not found Dec 08 09:28:01 crc kubenswrapper[4662]: I1208 09:28:01.819186 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:01 crc kubenswrapper[4662]: I1208 09:28:01.819330 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:01 crc kubenswrapper[4662]: E1208 09:28:01.819479 4662 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 08 09:28:01 crc kubenswrapper[4662]: E1208 09:28:01.819569 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:28:09.819547411 +0000 UTC m=+813.388575461 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "webhook-server-cert" not found Dec 08 09:28:01 crc kubenswrapper[4662]: E1208 09:28:01.820243 4662 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 08 09:28:01 crc kubenswrapper[4662]: E1208 09:28:01.820631 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:28:09.82061579 +0000 UTC m=+813.389643780 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "metrics-server-cert" not found Dec 08 09:28:07 crc kubenswrapper[4662]: E1208 09:28:07.359347 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a" Dec 08 09:28:07 crc kubenswrapper[4662]: E1208 09:28:07.360011 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:44126f9c6b1d2bf752ddf989e20a4fc4cc1c07723d4fcb78465ccb2f55da6b3a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67jnm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-5b5fd79c9c-jgbb9_openstack-operators(41089106-3be5-42f3-9c98-76ddb4e0a32c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:08 crc kubenswrapper[4662]: I1208 09:28:08.485871 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " 
pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:28:08 crc kubenswrapper[4662]: I1208 09:28:08.501156 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a51352cf-c6f2-40cc-9b72-035737c28e0e-cert\") pod \"infra-operator-controller-manager-78d48bff9d-wgcnt\" (UID: \"a51352cf-c6f2-40cc-9b72-035737c28e0e\") " pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:28:08 crc kubenswrapper[4662]: I1208 09:28:08.780870 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:28:09 crc kubenswrapper[4662]: E1208 09:28:09.027831 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027" Dec 08 09:28:09 crc kubenswrapper[4662]: E1208 09:28:09.028064 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:5370dc4a8e776923eec00bb50cbdb2e390e9dde50be26bdc04a216bd2d6b5027,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4mr4j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-5697bb5779-hrlb9_openstack-operators(e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe): ErrImagePull: rpc error: code = Canceled 
desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:09 crc kubenswrapper[4662]: E1208 09:28:09.512219 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a" Dec 08 09:28:09 crc kubenswrapper[4662]: E1208 09:28:09.512445 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:900050d3501c0785b227db34b89883efe68247816e5c7427cacb74f8aa10605a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6fb5m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-697fb699cf-wr5zg_openstack-operators(dda05715-875a-41d4-9ee6-c81406a965a9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:09 crc kubenswrapper[4662]: I1208 09:28:09.601864 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: \"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:28:09 crc kubenswrapper[4662]: I1208 09:28:09.607435 4662 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/078c0bad-4b25-4cdf-8d11-abfa0430137c-cert\") pod \"openstack-baremetal-operator-controller-manager-84b575879fsnfnd\" (UID: \"078c0bad-4b25-4cdf-8d11-abfa0430137c\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:28:09 crc kubenswrapper[4662]: I1208 09:28:09.762250 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:28:09 crc kubenswrapper[4662]: I1208 09:28:09.905149 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:09 crc kubenswrapper[4662]: I1208 09:28:09.905612 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:09 crc kubenswrapper[4662]: E1208 09:28:09.905457 4662 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Dec 08 09:28:09 crc kubenswrapper[4662]: E1208 09:28:09.905771 4662 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Dec 08 09:28:09 crc kubenswrapper[4662]: E1208 09:28:09.905820 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:28:25.905776879 +0000 UTC m=+829.474805039 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "webhook-server-cert" not found Dec 08 09:28:09 crc kubenswrapper[4662]: E1208 09:28:09.905859 4662 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs podName:91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60 nodeName:}" failed. No retries permitted until 2025-12-08 09:28:25.905844921 +0000 UTC m=+829.474872911 (durationBeforeRetry 16s). 
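
Two shifts in this stretch of the log. First, the cert mounts begin to succeed: MountVolume.SetUp for the infra-operator cert completes at 09:28:08 and for the baremetal-operator cert at 09:28:09, confirming that the earlier failures were only an ordering problem the retry loop eventually papers over. Second, pulls that did get admitted now fail with rpc error: code = Canceled desc = copying config: context canceled; the gRPC Canceled status indicates the kubelet side cancelled the CRI PullImage RPC (for example on a request timeout or pod re-sync) while CRI-O was still copying the image. A standalone sketch of how a cancelled caller context surfaces as exactly that status; pullImage below is a hypothetical stand-in for the CRI client call, not the real API:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc/status"
    )

    // pullImage blocks until the simulated copy finishes or the caller's
    // context is cancelled, as a CRI PullImage call would.
    func pullImage(ctx context.Context) error {
        select {
        case <-time.After(10 * time.Second): // pretend the layer copy takes this long
            return nil
        case <-ctx.Done():
            return status.FromContextError(ctx.Err()).Err()
        }
    }

    func main() {
        ctx, cancel := context.WithCancel(context.Background())
        go func() {
            time.Sleep(100 * time.Millisecond)
            cancel() // the kubelet side giving up mid-pull
        }()

        fmt.Println(pullImage(ctx)) // rpc error: code = Canceled desc = context canceled
    }

Since these pulls were already queued behind the QPS throttle, cancellations like this are expected noise until the pull backlog drains.
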
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs") pod "openstack-operator-controller-manager-8999f4b55-mhd69" (UID: "91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60") : secret "metrics-server-cert" not found Dec 08 09:28:12 crc kubenswrapper[4662]: E1208 09:28:12.221672 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87" Dec 08 09:28:12 crc kubenswrapper[4662]: E1208 09:28:12.222147 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:5bdb3685be3ddc1efd62e16aaf2fa96ead64315e26d52b1b2a7d8ac01baa1e87,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hhh48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-967d97867-85425_openstack-operators(5a541aca-2d5d-432d-a375-d639af4927ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:12 crc kubenswrapper[4662]: E1208 09:28:12.969530 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Dec 08 09:28:12 crc 
kubenswrapper[4662]: E1208 09:28:12.969719 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2rkvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-v66fn_openstack-operators(4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:12 crc kubenswrapper[4662]: E1208 09:28:12.971102 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn" podUID="4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8" Dec 08 09:28:13 crc kubenswrapper[4662]: E1208 09:28:13.248012 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn" podUID="4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8" Dec 08 09:28:14 crc kubenswrapper[4662]: E1208 09:28:14.810909 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea" Dec 08 09:28:14 crc kubenswrapper[4662]: E1208 09:28:14.811475 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/barbican-operator@sha256:f6059a0fbf031d34dcf086d14ce8c0546caeaee23c5780e90b5037c5feee9fea,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sb5pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-operator-controller-manager-7d9dfd778-pnmbz_openstack-operators(022626c4-d3b3-4c80-884c-6ae24361955a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:15 crc kubenswrapper[4662]: E1208 09:28:15.506626 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991" Dec 08 09:28:15 crc kubenswrapper[4662]: E1208 09:28:15.507045 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:3aa109bb973253ae9dcf339b9b65abbd1176cdb4be672c93e538a5f113816991,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 
-3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s9kx5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9d58d64bc-kh2rv_openstack-operators(7a4111f6-9632-4997-a71b-a514f200b5cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:18 crc kubenswrapper[4662]: E1208 09:28:18.843083 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59" Dec 08 09:28:18 crc kubenswrapper[4662]: E1208 09:28:18.843547 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:635a4aef9d6f0b799e8ec91333dbb312160c001d05b3c63f614c124e0b67cb59,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mh7bv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
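
Unflattened, the probe literals repeated in each of these dumps are ordinary corev1 values: every operator container gets an HTTP liveness probe on /healthz and a readiness probe on /readyz, both against port 8081, where Port:{0 8081 } is an intstr.IntOrString with Type 0 (an integer port). A sketch of the liveness value as it would appear in source:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	liveness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			HTTPGet: &corev1.HTTPGetAction{
				Path:   "/healthz",
				Port:   intstr.FromInt(8081), // prints as {0 8081 } in the dump
				Scheme: corev1.URISchemeHTTP,
			},
		},
		InitialDelaySeconds: 15,
		TimeoutSeconds:      1,
		PeriodSeconds:       20,
		SuccessThreshold:    1,
		FailureThreshold:    3,
	}
	fmt.Printf("%+v\n", liveness)
}
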
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-b6456fdb6-8hlzg_openstack-operators(08f788cf-75bb-4beb-bb4e-f9fd39c18972): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:19 crc kubenswrapper[4662]: E1208 09:28:19.406582 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429" Dec 08 09:28:19 crc kubenswrapper[4662]: E1208 09:28:19.406774 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:c4abfc148600dfa85915f3dc911d988ea2335f26cb6b8d749fe79bfe53e5e429,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sqv24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-5f64f6f8bb-zsdxn_openstack-operators(0fcb859f-b723-4629-902c-68696b4b8995): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:20 crc kubenswrapper[4662]: E1208 09:28:20.734073 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5" Dec 08 09:28:20 crc kubenswrapper[4662]: E1208 09:28:20.734689 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:9e847f4dbdea19ab997f32a02b3680a9bd966f9c705911645c3866a19fda9ea5,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b47w8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
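
All of these managers run with --leader-elect plus LEASE_DURATION=30, RENEW_DEADLINE=20 and RETRY_PERIOD=5, which map one-to-one onto controller-runtime's leader-election options. A hedged sketch of the typical main() wiring (the election ID is a placeholder, and metrics setup is omitted because its option moved between controller-runtime releases):

package main

import (
	"time"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/healthz"
)

func main() {
	lease, renew, retry := 30*time.Second, 20*time.Second, 5*time.Second
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		LeaderElection:         true,
		LeaderElectionID:       "example.openstack.org", // placeholder, not from the log
		LeaseDuration:          &lease,
		RenewDeadline:          &renew,
		RetryPeriod:            &retry,
		HealthProbeBindAddress: ":8081", // --health-probe-bind-address=:8081
	})
	if err != nil {
		panic(err)
	}
	// back the /healthz and /readyz endpoints the kubelet probes above
	_ = mgr.AddHealthzCheck("healthz", healthz.Ping)
	_ = mgr.AddReadyzCheck("readyz", healthz.Ping)
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
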
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-68c6d99b8f-4kwdx_openstack-operators(3882e308-ba7b-48c8-94f0-354b3926c925): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:21 crc kubenswrapper[4662]: E1208 09:28:21.269290 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670" Dec 08 09:28:21 crc kubenswrapper[4662]: E1208 09:28:21.269468 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:779f0cee6024d0fb8f259b036fe790e62aa5a3b0431ea9bf15a6e7d02e2e5670,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bp5d6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-697bc559fc-qtdrq_openstack-operators(00caf547-24eb-4e92-9294-900fbf53f068): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:21 crc kubenswrapper[4662]: E1208 09:28:21.791726 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7" Dec 08 09:28:21 crc kubenswrapper[4662]: E1208 09:28:21.791934 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:72ad6517987f674af0d0ae092cbb874aeae909c8b8b60188099c311762ebc8f7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4t4cc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-7765d96ddf-8zz9f_openstack-operators(60c3c9e4-042b-445f-96fe-7d4583ae29ee): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:22 crc kubenswrapper[4662]: E1208 09:28:22.559662 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8" Dec 08 09:28:22 crc kubenswrapper[4662]: E1208 09:28:22.560069 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4pmv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-667bd8d554-6xkzt_openstack-operators(19e2da99-64f8-48f3-974b-5a33bdbe683d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:23 crc kubenswrapper[4662]: E1208 09:28:23.141933 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94" Dec 08 09:28:23 crc kubenswrapper[4662]: E1208 09:28:23.142098 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kldsz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-5854674fcc-knpnl_openstack-operators(bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:25 crc kubenswrapper[4662]: I1208 09:28:25.953056 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:25 crc kubenswrapper[4662]: I1208 09:28:25.954146 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:25 crc kubenswrapper[4662]: I1208 09:28:25.970471 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-webhook-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:25 crc kubenswrapper[4662]: I1208 09:28:25.976423 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60-metrics-certs\") pod \"openstack-operator-controller-manager-8999f4b55-mhd69\" (UID: \"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60\") " pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:26 crc kubenswrapper[4662]: I1208 09:28:26.036947 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:27 crc kubenswrapper[4662]: E1208 09:28:27.278257 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f" Dec 08 09:28:27 crc kubenswrapper[4662]: E1208 09:28:27.278457 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q6wfj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-78f8948974-v5zsp_openstack-operators(92ae4959-7652-4be1-9349-d1a1fbb32d68): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:28 crc kubenswrapper[4662]: E1208 09:28:28.621961 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557" Dec 08 09:28:28 crc kubenswrapper[4662]: E1208 09:28:28.622410 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
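
The MountVolume.SetUp succeeded records at 09:28:25 close the loop on the earlier secret "metrics-server-cert" not found failure: the kubelet simply keeps retrying volume setup until the secret materializes, after which the pod sandbox is created. A client-side sketch of the same wait, assuming in-cluster credentials; the namespace and secret name are the ones in the log:

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// poll until the secret the volume needs exists, much as the kubelet's
	// volume reconciler keeps retrying SetUp until it stops failing
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().Secrets("openstack-operators").
				Get(ctx, "metrics-server-cert", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not there yet; keep retrying
			}
			return err == nil, err
		})
	fmt.Println("secret ready:", err == nil)
}
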
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g6tqb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-5fdfd5b6b5-k7nhg_openstack-operators(edef4f76-66e3-4431-8f3e-15b1be7dc525): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:28:29 crc kubenswrapper[4662]: I1208 09:28:29.220292 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt"] Dec 08 09:28:29 crc kubenswrapper[4662]: I1208 09:28:29.239078 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd"] Dec 08 09:28:29 crc kubenswrapper[4662]: W1208 09:28:29.334981 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod078c0bad_4b25_4cdf_8d11_abfa0430137c.slice/crio-ccadcb3b75029408031f7a1d3b7d0ed93ad6aef074e3c030331f511ed46b238d WatchSource:0}: Error finding container ccadcb3b75029408031f7a1d3b7d0ed93ad6aef074e3c030331f511ed46b238d: Status 404 returned error can't find the container with id ccadcb3b75029408031f7a1d3b7d0ed93ad6aef074e3c030331f511ed46b238d Dec 08 09:28:29 crc kubenswrapper[4662]: I1208 09:28:29.369873 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" event={"ID":"a51352cf-c6f2-40cc-9b72-035737c28e0e","Type":"ContainerStarted","Data":"b6af9fd548cd4fcdc661f4686a36976e9b7b25072648e6d97444a7da9f15f078"} Dec 08 09:28:29 crc kubenswrapper[4662]: I1208 09:28:29.371494 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" event={"ID":"078c0bad-4b25-4cdf-8d11-abfa0430137c","Type":"ContainerStarted","Data":"ccadcb3b75029408031f7a1d3b7d0ed93ad6aef074e3c030331f511ed46b238d"} Dec 08 09:28:29 crc kubenswrapper[4662]: I1208 09:28:29.812452 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69"] Dec 08 09:28:30 crc kubenswrapper[4662]: I1208 09:28:30.396331 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" event={"ID":"99cf0df3-a30d-4a1b-aa55-d5a814afd119","Type":"ContainerStarted","Data":"03e4298e01a2d7e5e9ce3a080e78c321a2922b8f3fac855e76aa8a6254e47120"} Dec 08 09:28:30 crc kubenswrapper[4662]: I1208 09:28:30.400916 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" event={"ID":"b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c","Type":"ContainerStarted","Data":"6f2a735dae56b6a13895de1f85a16d16f7b325f3f090f56198be7e30c22768c2"} Dec 08 09:28:30 crc kubenswrapper[4662]: I1208 09:28:30.403244 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" event={"ID":"2c718d76-ffa3-479f-972d-451437ca9b8e","Type":"ContainerStarted","Data":"0a2e541235d8f0bc1ded0800dc9367c260e39aea94e998ec528a62ac5f3e6f0c"} Dec 08 09:28:31 crc kubenswrapper[4662]: I1208 09:28:31.414297 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" event={"ID":"606d0c5a-d4b6-46c3-9cd8-367895904823","Type":"ContainerStarted","Data":"b614855e76551aed35e96f8014aa63bc1d78618a6aabf5670c50b08040a6b0ab"} Dec 08 09:28:31 crc kubenswrapper[4662]: I1208 09:28:31.415905 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" event={"ID":"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60","Type":"ContainerStarted","Data":"94dc2e955b65e2b339a1c37ddf23dedacdee3c182786b84f26fa9fb60ff584d8"} Dec 08 09:28:32 crc kubenswrapper[4662]: E1208 09:28:32.308029 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" podUID="92ae4959-7652-4be1-9349-d1a1fbb32d68" Dec 08 09:28:32 crc kubenswrapper[4662]: E1208 09:28:32.337073 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" podUID="e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe" Dec 08 09:28:32 crc kubenswrapper[4662]: I1208 09:28:32.441236 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" 
event={"ID":"91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60","Type":"ContainerStarted","Data":"436c62c9175af68baeda2a4478d3fc95173a3dfc52ccb537608b2f264998cc2e"} Dec 08 09:28:32 crc kubenswrapper[4662]: I1208 09:28:32.442513 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:32 crc kubenswrapper[4662]: I1208 09:28:32.444544 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" event={"ID":"92ae4959-7652-4be1-9349-d1a1fbb32d68","Type":"ContainerStarted","Data":"f60725cbb834d60c7092e4fecc398753532ea5fd21eb6dd4919d1f434928defc"} Dec 08 09:28:32 crc kubenswrapper[4662]: I1208 09:28:32.462238 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" event={"ID":"2c718d76-ffa3-479f-972d-451437ca9b8e","Type":"ContainerStarted","Data":"05717593a5ee5aea93d63c4dc9f52c07c22e171f102ac5ccb4825af2a087604d"} Dec 08 09:28:32 crc kubenswrapper[4662]: I1208 09:28:32.462355 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" Dec 08 09:28:32 crc kubenswrapper[4662]: I1208 09:28:32.464325 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn" event={"ID":"4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8","Type":"ContainerStarted","Data":"ed6a8f494205f73c58458cea7ac9ffa3b30533c3e67a1ce31480cbd53437b5b6"} Dec 08 09:28:32 crc kubenswrapper[4662]: I1208 09:28:32.466265 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" event={"ID":"e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe","Type":"ContainerStarted","Data":"c92242a46de4423a601f43e8111bca9abdcdfd05a92883454ae2def54ead546e"} Dec 08 09:28:32 crc kubenswrapper[4662]: I1208 09:28:32.474403 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" podStartSLOduration=39.474385892 podStartE2EDuration="39.474385892s" podCreationTimestamp="2025-12-08 09:27:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:28:32.472314856 +0000 UTC m=+836.041342846" watchObservedRunningTime="2025-12-08 09:28:32.474385892 +0000 UTC m=+836.043413882" Dec 08 09:28:32 crc kubenswrapper[4662]: I1208 09:28:32.530213 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-v66fn" podStartSLOduration=2.751200393 podStartE2EDuration="39.530195742s" podCreationTimestamp="2025-12-08 09:27:53 +0000 UTC" firstStartedPulling="2025-12-08 09:27:55.252554981 +0000 UTC m=+798.821582971" lastFinishedPulling="2025-12-08 09:28:32.03155033 +0000 UTC m=+835.600578320" observedRunningTime="2025-12-08 09:28:32.523856951 +0000 UTC m=+836.092884941" watchObservedRunningTime="2025-12-08 09:28:32.530195742 +0000 UTC m=+836.099223732" Dec 08 09:28:32 crc kubenswrapper[4662]: I1208 09:28:32.575008 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" podStartSLOduration=3.420057174 podStartE2EDuration="40.574990154s" podCreationTimestamp="2025-12-08 09:27:52 
+0000 UTC" firstStartedPulling="2025-12-08 09:27:54.816444471 +0000 UTC m=+798.385472461" lastFinishedPulling="2025-12-08 09:28:31.971377451 +0000 UTC m=+835.540405441" observedRunningTime="2025-12-08 09:28:32.554215052 +0000 UTC m=+836.123243042" watchObservedRunningTime="2025-12-08 09:28:32.574990154 +0000 UTC m=+836.144018144" Dec 08 09:28:32 crc kubenswrapper[4662]: E1208 09:28:32.587874 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" podUID="92ae4959-7652-4be1-9349-d1a1fbb32d68" Dec 08 09:28:32 crc kubenswrapper[4662]: E1208 09:28:32.618541 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" podUID="19e2da99-64f8-48f3-974b-5a33bdbe683d" Dec 08 09:28:32 crc kubenswrapper[4662]: E1208 09:28:32.639346 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" podUID="08f788cf-75bb-4beb-bb4e-f9fd39c18972" Dec 08 09:28:32 crc kubenswrapper[4662]: E1208 09:28:32.657985 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" podUID="edef4f76-66e3-4431-8f3e-15b1be7dc525" Dec 08 09:28:32 crc kubenswrapper[4662]: E1208 09:28:32.753058 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" podUID="bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2" Dec 08 09:28:32 crc kubenswrapper[4662]: E1208 09:28:32.994389 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" podUID="41089106-3be5-42f3-9c98-76ddb4e0a32c" Dec 08 09:28:33 crc kubenswrapper[4662]: E1208 09:28:33.262496 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" podUID="3882e308-ba7b-48c8-94f0-354b3926c925" Dec 08 09:28:33 crc kubenswrapper[4662]: E1208 09:28:33.298715 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" podUID="022626c4-d3b3-4c80-884c-6ae24361955a" Dec 08 09:28:33 crc kubenswrapper[4662]: E1208 
09:28:33.318347 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" podUID="dda05715-875a-41d4-9ee6-c81406a965a9" Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.481389 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" event={"ID":"606d0c5a-d4b6-46c3-9cd8-367895904823","Type":"ContainerStarted","Data":"3a1cbe5ba0b85d894c0cd075f37fc449a3cd15e5d6b1e07e2cf6571eeac6a7de"} Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.482426 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.485956 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" event={"ID":"99cf0df3-a30d-4a1b-aa55-d5a814afd119","Type":"ContainerStarted","Data":"4e88604cc0100c09e25ad1443bceb41bb08155230a90cacbc54346e8457be115"} Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.486094 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.487427 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" event={"ID":"08f788cf-75bb-4beb-bb4e-f9fd39c18972","Type":"ContainerStarted","Data":"084f41199ee60f1acbf59f0323db1e38938b39d1a56fa82239b7a4e7f4a79e29"} Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.494533 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" event={"ID":"bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2","Type":"ContainerStarted","Data":"3143b9850d4e5cfe7d53833d76ea38b19100338042b34802fbc052edd95c8fea"} Dec 08 09:28:33 crc kubenswrapper[4662]: E1208 09:28:33.496113 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:101b3e007d8c9f2e183262d7712f986ad51256448099069bc14f1ea5f997ab94\\\"\"" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" podUID="bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2" Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.496320 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" event={"ID":"19e2da99-64f8-48f3-974b-5a33bdbe683d","Type":"ContainerStarted","Data":"d179dfcb7c79a489f374b7e6541bc080cf97d6725de9137e2b705407d5f9a90f"} Dec 08 09:28:33 crc kubenswrapper[4662]: E1208 09:28:33.497512 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:6b3e0302608a2e70f9b5ae9167f6fbf59264f226d9db99d48f70466ab2f216b8\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" podUID="19e2da99-64f8-48f3-974b-5a33bdbe683d" Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.509271 4662 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" event={"ID":"41089106-3be5-42f3-9c98-76ddb4e0a32c","Type":"ContainerStarted","Data":"fed9bd3a6982071bcf470aeff52e8411572cc16440d4a581abcc85f569ff87fc"} Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.516976 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" event={"ID":"3882e308-ba7b-48c8-94f0-354b3926c925","Type":"ContainerStarted","Data":"72c82e5079e12d2b09d5cb6fc65c4a5ea0a496167097e53233f1cff11dff2e31"} Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.533218 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" podStartSLOduration=3.060914074 podStartE2EDuration="40.533206682s" podCreationTimestamp="2025-12-08 09:27:53 +0000 UTC" firstStartedPulling="2025-12-08 09:27:55.130394786 +0000 UTC m=+798.699422776" lastFinishedPulling="2025-12-08 09:28:32.602687394 +0000 UTC m=+836.171715384" observedRunningTime="2025-12-08 09:28:33.50468374 +0000 UTC m=+837.073711730" watchObservedRunningTime="2025-12-08 09:28:33.533206682 +0000 UTC m=+837.102234672" Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.535378 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" event={"ID":"022626c4-d3b3-4c80-884c-6ae24361955a","Type":"ContainerStarted","Data":"c7e11caee7a080229519f4df85bdfe340e0e89d24b15775833838ec7d17bfa79"} Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.540690 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" event={"ID":"edef4f76-66e3-4431-8f3e-15b1be7dc525","Type":"ContainerStarted","Data":"79a3ba8168c834678643f4856f9113cb2ded973d11969738434ca7a49b864db8"} Dec 08 09:28:33 crc kubenswrapper[4662]: E1208 09:28:33.541978 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" podUID="edef4f76-66e3-4431-8f3e-15b1be7dc525" Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.545630 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" event={"ID":"dda05715-875a-41d4-9ee6-c81406a965a9","Type":"ContainerStarted","Data":"c4f3baf7c306530a332cb11ef6bd4db66feeab08caa67aa78dc68dcce209869b"} Dec 08 09:28:33 crc kubenswrapper[4662]: I1208 09:28:33.554376 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" podStartSLOduration=3.8086511979999997 podStartE2EDuration="41.554359924s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.859463395 +0000 UTC m=+798.428491385" lastFinishedPulling="2025-12-08 09:28:32.605172121 +0000 UTC m=+836.174200111" observedRunningTime="2025-12-08 09:28:33.545479024 +0000 UTC m=+837.114507014" watchObservedRunningTime="2025-12-08 09:28:33.554359924 +0000 UTC m=+837.123387914" Dec 08 09:28:34 crc kubenswrapper[4662]: I1208 09:28:34.554782 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack-operators/octavia-operator-controller-manager-998648c74-9b5bw" Dec 08 09:28:35 crc kubenswrapper[4662]: I1208 09:28:35.597401 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" event={"ID":"3882e308-ba7b-48c8-94f0-354b3926c925","Type":"ContainerStarted","Data":"be38ef57d1dba5b3cefc63e47ee21d439ccc3b9361255bf9f1df6ce5e96a6e90"} Dec 08 09:28:35 crc kubenswrapper[4662]: I1208 09:28:35.598781 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" Dec 08 09:28:35 crc kubenswrapper[4662]: I1208 09:28:35.604119 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" Dec 08 09:28:35 crc kubenswrapper[4662]: I1208 09:28:35.626056 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" event={"ID":"078c0bad-4b25-4cdf-8d11-abfa0430137c","Type":"ContainerStarted","Data":"93177add735f2ada4f98466f836b5725a83091491c0867f869df5827d8073033"} Dec 08 09:28:35 crc kubenswrapper[4662]: I1208 09:28:35.662760 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" podStartSLOduration=3.406284451 podStartE2EDuration="43.662723932s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.814658023 +0000 UTC m=+798.383686013" lastFinishedPulling="2025-12-08 09:28:35.071097504 +0000 UTC m=+838.640125494" observedRunningTime="2025-12-08 09:28:35.625748602 +0000 UTC m=+839.194776592" watchObservedRunningTime="2025-12-08 09:28:35.662723932 +0000 UTC m=+839.231751922" Dec 08 09:28:35 crc kubenswrapper[4662]: I1208 09:28:35.671194 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" podStartSLOduration=4.067520663 podStartE2EDuration="43.671179521s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.816253416 +0000 UTC m=+798.385281406" lastFinishedPulling="2025-12-08 09:28:34.419912274 +0000 UTC m=+837.988940264" observedRunningTime="2025-12-08 09:28:35.650609395 +0000 UTC m=+839.219637385" watchObservedRunningTime="2025-12-08 09:28:35.671179521 +0000 UTC m=+839.240207511" Dec 08 09:28:35 crc kubenswrapper[4662]: E1208 09:28:35.702532 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" podUID="7a4111f6-9632-4997-a71b-a514f200b5cf" Dec 08 09:28:35 crc kubenswrapper[4662]: E1208 09:28:35.921020 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" podUID="5a541aca-2d5d-432d-a375-d639af4927ee" Dec 08 09:28:35 crc kubenswrapper[4662]: E1208 09:28:35.961717 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context 
canceled\"" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" podUID="0fcb859f-b723-4629-902c-68696b4b8995" Dec 08 09:28:35 crc kubenswrapper[4662]: E1208 09:28:35.965586 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" podUID="00caf547-24eb-4e92-9294-900fbf53f068" Dec 08 09:28:35 crc kubenswrapper[4662]: E1208 09:28:35.987267 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" podUID="60c3c9e4-042b-445f-96fe-7d4583ae29ee" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.051780 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-8999f4b55-mhd69" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.624560 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" event={"ID":"b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c","Type":"ContainerStarted","Data":"8c097d6033ff6e232bc62d1fa5276bf3593491521841ff05d6b7b0ee1737c0c2"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.624763 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.628961 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" event={"ID":"0fcb859f-b723-4629-902c-68696b4b8995","Type":"ContainerStarted","Data":"0aceb3e899eb6ac2dfb884bbd17c1bff926af6aee521a38b323cf1d5a5eb35cb"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.633861 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" event={"ID":"a51352cf-c6f2-40cc-9b72-035737c28e0e","Type":"ContainerStarted","Data":"52fefb771f2136672095e6338a803ad93eb24f880b1d0e89569679855c628be0"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.633899 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" event={"ID":"a51352cf-c6f2-40cc-9b72-035737c28e0e","Type":"ContainerStarted","Data":"5347e12f423dcc6422c596ac5661e08a100fb89b4ac4775e138efc5148eb9027"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.634510 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.636207 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" event={"ID":"08f788cf-75bb-4beb-bb4e-f9fd39c18972","Type":"ContainerStarted","Data":"95e6fb8a4a3bc40ac2b1ef61188ebe1b66f31664df44355b725e591faca8959e"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.636705 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.638420 4662 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" event={"ID":"00caf547-24eb-4e92-9294-900fbf53f068","Type":"ContainerStarted","Data":"baaf5225f87204fc39f07ba00a04d5ca842d72a26a20f596df89b979420c6461"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.643650 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" event={"ID":"078c0bad-4b25-4cdf-8d11-abfa0430137c","Type":"ContainerStarted","Data":"84af9a8164f1f3e9d0417d75ce6ac4f69577dac2b2341d2a28b666e5921bc87e"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.644028 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.644942 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.645938 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" event={"ID":"dda05715-875a-41d4-9ee6-c81406a965a9","Type":"ContainerStarted","Data":"bbcc41b559a8a0723bc32498122243fe064a69be68e26a49a996b50337653af3"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.646077 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.649402 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-6c677c69b-l9m5c" podStartSLOduration=3.9450517510000003 podStartE2EDuration="44.64938892s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:53.715596295 +0000 UTC m=+797.284624285" lastFinishedPulling="2025-12-08 09:28:34.419933464 +0000 UTC m=+837.988961454" observedRunningTime="2025-12-08 09:28:36.648198948 +0000 UTC m=+840.217226938" watchObservedRunningTime="2025-12-08 09:28:36.64938892 +0000 UTC m=+840.218416910" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.649846 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" event={"ID":"41089106-3be5-42f3-9c98-76ddb4e0a32c","Type":"ContainerStarted","Data":"f1f43e466021cb4fca0e78661180dd0f76a509c6b7a2a2e11e1810f4098a9b4b"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.650251 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.653676 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" event={"ID":"e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe","Type":"ContainerStarted","Data":"ec581ba3e2fd33bd667db36d928761d67f69b4969f2799f97f1be53bc9263378"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.658103 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" event={"ID":"7a4111f6-9632-4997-a71b-a514f200b5cf","Type":"ContainerStarted","Data":"57e36b32480a819e30df54382bff94482549204cc33194e710762dee19de4c52"} Dec 
08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.662854 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" event={"ID":"60c3c9e4-042b-445f-96fe-7d4583ae29ee","Type":"ContainerStarted","Data":"63c2f431ff245c87d2779bb7cca583d3c982529c413e218308510de10c1ae7b6"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.666270 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" event={"ID":"5a541aca-2d5d-432d-a375-d639af4927ee","Type":"ContainerStarted","Data":"a981f8e2b1c7419c15aeca3d79a33fced652af5db0661d9e7074053cbee7fc45"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.675110 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" event={"ID":"022626c4-d3b3-4c80-884c-6ae24361955a","Type":"ContainerStarted","Data":"7b59cd96eb46ccef64fa03711234b842ac5b121f4f2f16d34369cd1ee13e3ec8"} Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.675226 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.694193 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" podStartSLOduration=3.385381926 podStartE2EDuration="44.694179302s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:53.762851823 +0000 UTC m=+797.331879813" lastFinishedPulling="2025-12-08 09:28:35.071649199 +0000 UTC m=+838.640677189" observedRunningTime="2025-12-08 09:28:36.691350735 +0000 UTC m=+840.260378725" watchObservedRunningTime="2025-12-08 09:28:36.694179302 +0000 UTC m=+840.263207292" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.747484 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" podStartSLOduration=38.7424422 podStartE2EDuration="44.747468274s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:28:29.347766132 +0000 UTC m=+832.916794122" lastFinishedPulling="2025-12-08 09:28:35.352792156 +0000 UTC m=+838.921820196" observedRunningTime="2025-12-08 09:28:36.744123073 +0000 UTC m=+840.313151063" watchObservedRunningTime="2025-12-08 09:28:36.747468274 +0000 UTC m=+840.316496264" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.849026 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" podStartSLOduration=39.776577791 podStartE2EDuration="44.849009921s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:28:29.348551383 +0000 UTC m=+832.917579373" lastFinishedPulling="2025-12-08 09:28:34.420983523 +0000 UTC m=+837.990011503" observedRunningTime="2025-12-08 09:28:36.847218823 +0000 UTC m=+840.416246813" watchObservedRunningTime="2025-12-08 09:28:36.849009921 +0000 UTC m=+840.418037911" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.940563 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" podStartSLOduration=4.53754875 podStartE2EDuration="44.940548128s" podCreationTimestamp="2025-12-08 09:27:52 
+0000 UTC" firstStartedPulling="2025-12-08 09:27:54.853203175 +0000 UTC m=+798.422231165" lastFinishedPulling="2025-12-08 09:28:35.256202553 +0000 UTC m=+838.825230543" observedRunningTime="2025-12-08 09:28:36.913831115 +0000 UTC m=+840.482859115" watchObservedRunningTime="2025-12-08 09:28:36.940548128 +0000 UTC m=+840.509576118" Dec 08 09:28:36 crc kubenswrapper[4662]: I1208 09:28:36.969073 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" podStartSLOduration=3.97681721 podStartE2EDuration="44.969052999s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.283887202 +0000 UTC m=+797.852915192" lastFinishedPulling="2025-12-08 09:28:35.276122951 +0000 UTC m=+838.845150981" observedRunningTime="2025-12-08 09:28:36.966056458 +0000 UTC m=+840.535084458" watchObservedRunningTime="2025-12-08 09:28:36.969052999 +0000 UTC m=+840.538080989" Dec 08 09:28:37 crc kubenswrapper[4662]: I1208 09:28:37.010596 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" podStartSLOduration=4.570265976 podStartE2EDuration="45.010577253s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.814379655 +0000 UTC m=+798.383407645" lastFinishedPulling="2025-12-08 09:28:35.254690932 +0000 UTC m=+838.823718922" observedRunningTime="2025-12-08 09:28:37.006067611 +0000 UTC m=+840.575095601" watchObservedRunningTime="2025-12-08 09:28:37.010577253 +0000 UTC m=+840.579605243" Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.707614 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" event={"ID":"5a541aca-2d5d-432d-a375-d639af4927ee","Type":"ContainerStarted","Data":"4be0178b03ce467665c766d49753f845fb9de66022020582c7e36d259d8b0a06"} Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.707960 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" event={"ID":"0fcb859f-b723-4629-902c-68696b4b8995","Type":"ContainerStarted","Data":"b21ef2382dcafc69e34fe913d8081faa8cb383c6df9e62e3b9955f77022fe1f4"} Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.707979 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.707992 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" event={"ID":"00caf547-24eb-4e92-9294-900fbf53f068","Type":"ContainerStarted","Data":"05a47e4b5a86acb3b9c56cd359ee7dc945e17c7664e19c02ba2c1d87139844d4"} Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.708005 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" event={"ID":"7a4111f6-9632-4997-a71b-a514f200b5cf","Type":"ContainerStarted","Data":"788f04aa713e3832addbb114f81f67469e1e56ca892219151d098247588b40b8"} Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.708018 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.708030 4662 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.708040 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" event={"ID":"60c3c9e4-042b-445f-96fe-7d4583ae29ee","Type":"ContainerStarted","Data":"6dbba38ace3a53dc332b5b6dead348ce5dd61eb4311a27ef5669201041b97349"} Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.708053 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.708064 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.722660 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" podStartSLOduration=3.949056127 podStartE2EDuration="46.722645658s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.816022399 +0000 UTC m=+798.385050389" lastFinishedPulling="2025-12-08 09:28:37.58961193 +0000 UTC m=+841.158639920" observedRunningTime="2025-12-08 09:28:38.720086929 +0000 UTC m=+842.289114919" watchObservedRunningTime="2025-12-08 09:28:38.722645658 +0000 UTC m=+842.291673648" Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.749128 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" podStartSLOduration=4.021168949 podStartE2EDuration="46.749107524s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.814375295 +0000 UTC m=+798.383403285" lastFinishedPulling="2025-12-08 09:28:37.54231387 +0000 UTC m=+841.111341860" observedRunningTime="2025-12-08 09:28:38.74488194 +0000 UTC m=+842.313909940" watchObservedRunningTime="2025-12-08 09:28:38.749107524 +0000 UTC m=+842.318135514" Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.765394 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" podStartSLOduration=4.309153861 podStartE2EDuration="46.765371134s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:55.083782184 +0000 UTC m=+798.652810174" lastFinishedPulling="2025-12-08 09:28:37.539999457 +0000 UTC m=+841.109027447" observedRunningTime="2025-12-08 09:28:38.761284723 +0000 UTC m=+842.330312713" watchObservedRunningTime="2025-12-08 09:28:38.765371134 +0000 UTC m=+842.334399124" Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.782625 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" podStartSLOduration=4.095015377 podStartE2EDuration="46.78261147s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.789612795 +0000 UTC m=+798.358640785" lastFinishedPulling="2025-12-08 09:28:37.477208888 +0000 UTC m=+841.046236878" observedRunningTime="2025-12-08 09:28:38.779897077 +0000 UTC m=+842.348925067" watchObservedRunningTime="2025-12-08 09:28:38.78261147 +0000 UTC m=+842.351639460" Dec 08 09:28:38 crc kubenswrapper[4662]: I1208 09:28:38.804472 4662 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" podStartSLOduration=3.9348312930000002 podStartE2EDuration="46.804457822s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.81936531 +0000 UTC m=+798.388393300" lastFinishedPulling="2025-12-08 09:28:37.688991839 +0000 UTC m=+841.258019829" observedRunningTime="2025-12-08 09:28:38.800686879 +0000 UTC m=+842.369714869" watchObservedRunningTime="2025-12-08 09:28:38.804457822 +0000 UTC m=+842.373485812" Dec 08 09:28:42 crc kubenswrapper[4662]: I1208 09:28:42.671491 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7d9dfd778-pnmbz" Dec 08 09:28:42 crc kubenswrapper[4662]: I1208 09:28:42.779059 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-697fb699cf-wr5zg" Dec 08 09:28:42 crc kubenswrapper[4662]: I1208 09:28:42.779851 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-5697bb5779-hrlb9" Dec 08 09:28:42 crc kubenswrapper[4662]: I1208 09:28:42.841848 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5f64f6f8bb-zsdxn" Dec 08 09:28:42 crc kubenswrapper[4662]: I1208 09:28:42.945647 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-68c6d99b8f-4kwdx" Dec 08 09:28:42 crc kubenswrapper[4662]: I1208 09:28:42.967956 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-967d97867-85425" Dec 08 09:28:43 crc kubenswrapper[4662]: I1208 09:28:43.290360 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-79c8c4686c-hhz55" Dec 08 09:28:43 crc kubenswrapper[4662]: I1208 09:28:43.290853 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-5b5fd79c9c-jgbb9" Dec 08 09:28:43 crc kubenswrapper[4662]: I1208 09:28:43.353787 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7765d96ddf-8zz9f" Dec 08 09:28:43 crc kubenswrapper[4662]: I1208 09:28:43.567326 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-697bc559fc-qtdrq" Dec 08 09:28:43 crc kubenswrapper[4662]: E1208 09:28:43.699778 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:d29650b006da97eb9178fcc58f2eb9fead8c2b414fac18f86a3c3a1507488c4f\\\"\"" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" podUID="92ae4959-7652-4be1-9349-d1a1fbb32d68" Dec 08 09:28:43 crc kubenswrapper[4662]: I1208 09:28:43.791534 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-b6456fdb6-8hlzg" Dec 08 09:28:44 crc kubenswrapper[4662]: I1208 09:28:44.132786 4662 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9d58d64bc-kh2rv" Dec 08 09:28:44 crc kubenswrapper[4662]: I1208 09:28:44.177635 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-58d5ff84df-jnfw7" Dec 08 09:28:45 crc kubenswrapper[4662]: E1208 09:28:45.699086 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:0b3fb69f35c151895d3dffd514974a9f9fe1c77c3bca69b78b81efb183cf4557\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" podUID="edef4f76-66e3-4431-8f3e-15b1be7dc525" Dec 08 09:28:46 crc kubenswrapper[4662]: I1208 09:28:46.788136 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" event={"ID":"bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2","Type":"ContainerStarted","Data":"89ef9648a51d2f54288e9e6e58d518cee037ae967e77fdc2dd37c9484da7fb55"} Dec 08 09:28:46 crc kubenswrapper[4662]: I1208 09:28:46.789209 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" Dec 08 09:28:46 crc kubenswrapper[4662]: I1208 09:28:46.809804 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" podStartSLOduration=3.804267569 podStartE2EDuration="54.80978042s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:55.119973244 +0000 UTC m=+798.689001234" lastFinishedPulling="2025-12-08 09:28:46.125486095 +0000 UTC m=+849.694514085" observedRunningTime="2025-12-08 09:28:46.808407963 +0000 UTC m=+850.377435953" watchObservedRunningTime="2025-12-08 09:28:46.80978042 +0000 UTC m=+850.378808430" Dec 08 09:28:47 crc kubenswrapper[4662]: I1208 09:28:47.799569 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" event={"ID":"19e2da99-64f8-48f3-974b-5a33bdbe683d","Type":"ContainerStarted","Data":"8bbb083a94a6f3a6a664f6fe7508b59caf4b0c1243636ea55205df0c3646f9e9"} Dec 08 09:28:47 crc kubenswrapper[4662]: I1208 09:28:47.799944 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" Dec 08 09:28:47 crc kubenswrapper[4662]: I1208 09:28:47.821407 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" podStartSLOduration=2.784589045 podStartE2EDuration="54.821387761s" podCreationTimestamp="2025-12-08 09:27:53 +0000 UTC" firstStartedPulling="2025-12-08 09:27:55.129231974 +0000 UTC m=+798.698259964" lastFinishedPulling="2025-12-08 09:28:47.16603069 +0000 UTC m=+850.735058680" observedRunningTime="2025-12-08 09:28:47.816848078 +0000 UTC m=+851.385876078" watchObservedRunningTime="2025-12-08 09:28:47.821387761 +0000 UTC m=+851.390415761" Dec 08 09:28:48 crc kubenswrapper[4662]: I1208 09:28:48.789791 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-78d48bff9d-wgcnt" Dec 08 09:28:49 crc kubenswrapper[4662]: I1208 09:28:49.767257 4662 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-84b575879fsnfnd" Dec 08 09:28:54 crc kubenswrapper[4662]: I1208 09:28:54.157507 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-5854674fcc-knpnl" Dec 08 09:28:54 crc kubenswrapper[4662]: I1208 09:28:54.209987 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-667bd8d554-6xkzt" Dec 08 09:28:59 crc kubenswrapper[4662]: I1208 09:28:59.706853 4662 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 09:28:59 crc kubenswrapper[4662]: I1208 09:28:59.910309 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" event={"ID":"92ae4959-7652-4be1-9349-d1a1fbb32d68","Type":"ContainerStarted","Data":"ba64532e63ccc30e32012d50a2746fcbe8e704c30f75eca8a5a902b8970b17b0"} Dec 08 09:28:59 crc kubenswrapper[4662]: I1208 09:28:59.910564 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" Dec 08 09:28:59 crc kubenswrapper[4662]: I1208 09:28:59.932838 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" podStartSLOduration=3.119646329 podStartE2EDuration="1m7.932822608s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.879548618 +0000 UTC m=+798.448576608" lastFinishedPulling="2025-12-08 09:28:59.692724897 +0000 UTC m=+863.261752887" observedRunningTime="2025-12-08 09:28:59.923189356 +0000 UTC m=+863.492217366" watchObservedRunningTime="2025-12-08 09:28:59.932822608 +0000 UTC m=+863.501850598" Dec 08 09:29:00 crc kubenswrapper[4662]: I1208 09:29:00.918833 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" event={"ID":"edef4f76-66e3-4431-8f3e-15b1be7dc525","Type":"ContainerStarted","Data":"a4068b70a41ea462e7eda18e496eb3f0b1935f67718f8c7e57fcf636aaaa6472"} Dec 08 09:29:00 crc kubenswrapper[4662]: I1208 09:29:00.919322 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" Dec 08 09:29:00 crc kubenswrapper[4662]: I1208 09:29:00.940514 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" podStartSLOduration=3.601870606 podStartE2EDuration="1m8.940491552s" podCreationTimestamp="2025-12-08 09:27:52 +0000 UTC" firstStartedPulling="2025-12-08 09:27:54.859693031 +0000 UTC m=+798.428721021" lastFinishedPulling="2025-12-08 09:29:00.198313977 +0000 UTC m=+863.767341967" observedRunningTime="2025-12-08 09:29:00.934857309 +0000 UTC m=+864.503885299" watchObservedRunningTime="2025-12-08 09:29:00.940491552 +0000 UTC m=+864.509519552" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.641042 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tncgb"] Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.644452 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.683710 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tncgb"] Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.765581 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-catalog-content\") pod \"redhat-operators-tncgb\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.766036 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-utilities\") pod \"redhat-operators-tncgb\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.766222 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srcrg\" (UniqueName: \"kubernetes.io/projected/c07ec2b5-0682-4159-96f6-0308a8752c86-kube-api-access-srcrg\") pod \"redhat-operators-tncgb\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.867876 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-utilities\") pod \"redhat-operators-tncgb\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.867937 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-srcrg\" (UniqueName: \"kubernetes.io/projected/c07ec2b5-0682-4159-96f6-0308a8752c86-kube-api-access-srcrg\") pod \"redhat-operators-tncgb\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.867999 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-catalog-content\") pod \"redhat-operators-tncgb\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.868504 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-utilities\") pod \"redhat-operators-tncgb\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.868606 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-catalog-content\") pod \"redhat-operators-tncgb\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.890687 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-srcrg\" (UniqueName: \"kubernetes.io/projected/c07ec2b5-0682-4159-96f6-0308a8752c86-kube-api-access-srcrg\") pod \"redhat-operators-tncgb\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:04 crc kubenswrapper[4662]: I1208 09:29:04.973243 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:05 crc kubenswrapper[4662]: I1208 09:29:05.456031 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tncgb"] Dec 08 09:29:05 crc kubenswrapper[4662]: I1208 09:29:05.952864 4662 generic.go:334] "Generic (PLEG): container finished" podID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerID="2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962" exitCode=0 Dec 08 09:29:05 crc kubenswrapper[4662]: I1208 09:29:05.952917 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tncgb" event={"ID":"c07ec2b5-0682-4159-96f6-0308a8752c86","Type":"ContainerDied","Data":"2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962"} Dec 08 09:29:05 crc kubenswrapper[4662]: I1208 09:29:05.952946 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tncgb" event={"ID":"c07ec2b5-0682-4159-96f6-0308a8752c86","Type":"ContainerStarted","Data":"e9c8f3e568a08de55753ef8c8bc8fd5e94bba444dc81830bfde0c77cdd27ba35"} Dec 08 09:29:06 crc kubenswrapper[4662]: I1208 09:29:06.961116 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tncgb" event={"ID":"c07ec2b5-0682-4159-96f6-0308a8752c86","Type":"ContainerStarted","Data":"4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466"} Dec 08 09:29:08 crc kubenswrapper[4662]: I1208 09:29:08.978140 4662 generic.go:334] "Generic (PLEG): container finished" podID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerID="4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466" exitCode=0 Dec 08 09:29:08 crc kubenswrapper[4662]: I1208 09:29:08.978264 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tncgb" event={"ID":"c07ec2b5-0682-4159-96f6-0308a8752c86","Type":"ContainerDied","Data":"4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466"} Dec 08 09:29:09 crc kubenswrapper[4662]: I1208 09:29:09.989866 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tncgb" event={"ID":"c07ec2b5-0682-4159-96f6-0308a8752c86","Type":"ContainerStarted","Data":"92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421"} Dec 08 09:29:13 crc kubenswrapper[4662]: I1208 09:29:13.526073 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-5fdfd5b6b5-k7nhg" Dec 08 09:29:13 crc kubenswrapper[4662]: I1208 09:29:13.545958 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tncgb" podStartSLOduration=6.146839057 podStartE2EDuration="9.545932916s" podCreationTimestamp="2025-12-08 09:29:04 +0000 UTC" firstStartedPulling="2025-12-08 09:29:05.954421318 +0000 UTC m=+869.523449308" lastFinishedPulling="2025-12-08 09:29:09.353515187 +0000 UTC m=+872.922543167" observedRunningTime="2025-12-08 09:29:10.012041332 +0000 UTC m=+873.581069322" watchObservedRunningTime="2025-12-08 09:29:13.545932916 +0000 
UTC m=+877.114960926" Dec 08 09:29:13 crc kubenswrapper[4662]: I1208 09:29:13.815851 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-78f8948974-v5zsp" Dec 08 09:29:14 crc kubenswrapper[4662]: I1208 09:29:14.973391 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:14 crc kubenswrapper[4662]: I1208 09:29:14.973455 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:15 crc kubenswrapper[4662]: I1208 09:29:15.036614 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:15 crc kubenswrapper[4662]: I1208 09:29:15.109819 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:15 crc kubenswrapper[4662]: I1208 09:29:15.278371 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tncgb"] Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.038352 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tncgb" podUID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerName="registry-server" containerID="cri-o://92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421" gracePeriod=2 Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.415448 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.559759 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-catalog-content\") pod \"c07ec2b5-0682-4159-96f6-0308a8752c86\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.559940 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-utilities\") pod \"c07ec2b5-0682-4159-96f6-0308a8752c86\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.560121 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srcrg\" (UniqueName: \"kubernetes.io/projected/c07ec2b5-0682-4159-96f6-0308a8752c86-kube-api-access-srcrg\") pod \"c07ec2b5-0682-4159-96f6-0308a8752c86\" (UID: \"c07ec2b5-0682-4159-96f6-0308a8752c86\") " Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.560573 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-utilities" (OuterVolumeSpecName: "utilities") pod "c07ec2b5-0682-4159-96f6-0308a8752c86" (UID: "c07ec2b5-0682-4159-96f6-0308a8752c86"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.564512 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c07ec2b5-0682-4159-96f6-0308a8752c86-kube-api-access-srcrg" (OuterVolumeSpecName: "kube-api-access-srcrg") pod "c07ec2b5-0682-4159-96f6-0308a8752c86" (UID: "c07ec2b5-0682-4159-96f6-0308a8752c86"). InnerVolumeSpecName "kube-api-access-srcrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.661898 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.661931 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-srcrg\" (UniqueName: \"kubernetes.io/projected/c07ec2b5-0682-4159-96f6-0308a8752c86-kube-api-access-srcrg\") on node \"crc\" DevicePath \"\"" Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.680091 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c07ec2b5-0682-4159-96f6-0308a8752c86" (UID: "c07ec2b5-0682-4159-96f6-0308a8752c86"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:29:17 crc kubenswrapper[4662]: I1208 09:29:17.763493 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c07ec2b5-0682-4159-96f6-0308a8752c86-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.049701 4662 generic.go:334] "Generic (PLEG): container finished" podID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerID="92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421" exitCode=0 Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.049793 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tncgb" event={"ID":"c07ec2b5-0682-4159-96f6-0308a8752c86","Type":"ContainerDied","Data":"92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421"} Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.049831 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tncgb" event={"ID":"c07ec2b5-0682-4159-96f6-0308a8752c86","Type":"ContainerDied","Data":"e9c8f3e568a08de55753ef8c8bc8fd5e94bba444dc81830bfde0c77cdd27ba35"} Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.049853 4662 scope.go:117] "RemoveContainer" containerID="92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.049986 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tncgb" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.077859 4662 scope.go:117] "RemoveContainer" containerID="4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.103059 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tncgb"] Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.106872 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tncgb"] Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.123136 4662 scope.go:117] "RemoveContainer" containerID="2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.149388 4662 scope.go:117] "RemoveContainer" containerID="92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421" Dec 08 09:29:18 crc kubenswrapper[4662]: E1208 09:29:18.149887 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421\": container with ID starting with 92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421 not found: ID does not exist" containerID="92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.150959 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421"} err="failed to get container status \"92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421\": rpc error: code = NotFound desc = could not find container \"92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421\": container with ID starting with 92dffb4ef6244876b67c885af92b8c1df495daaf41ab13a31337f22a4ae60421 not found: ID does not exist" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.151033 4662 scope.go:117] "RemoveContainer" containerID="4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466" Dec 08 09:29:18 crc kubenswrapper[4662]: E1208 09:29:18.151670 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466\": container with ID starting with 4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466 not found: ID does not exist" containerID="4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.151837 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466"} err="failed to get container status \"4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466\": rpc error: code = NotFound desc = could not find container \"4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466\": container with ID starting with 4f85f0472b413702599c0a923b246e66fa6a688a29ba98b8b1c0d6b145575466 not found: ID does not exist" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.151941 4662 scope.go:117] "RemoveContainer" containerID="2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962" Dec 08 09:29:18 crc kubenswrapper[4662]: E1208 09:29:18.152317 4662 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962\": container with ID starting with 2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962 not found: ID does not exist" containerID="2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.152343 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962"} err="failed to get container status \"2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962\": rpc error: code = NotFound desc = could not find container \"2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962\": container with ID starting with 2283d828426206d3f93102cdaa7ea274b36239e592519e2b171596227d4b3962 not found: ID does not exist" Dec 08 09:29:18 crc kubenswrapper[4662]: E1208 09:29:18.154937 4662 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc07ec2b5_0682_4159_96f6_0308a8752c86.slice\": RecentStats: unable to find data in memory cache]" Dec 08 09:29:18 crc kubenswrapper[4662]: I1208 09:29:18.706634 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c07ec2b5-0682-4159-96f6-0308a8752c86" path="/var/lib/kubelet/pods/c07ec2b5-0682-4159-96f6-0308a8752c86/volumes" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.684640 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2thnx"] Dec 08 09:29:20 crc kubenswrapper[4662]: E1208 09:29:20.685261 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerName="registry-server" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.685272 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerName="registry-server" Dec 08 09:29:20 crc kubenswrapper[4662]: E1208 09:29:20.685281 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerName="extract-content" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.685287 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerName="extract-content" Dec 08 09:29:20 crc kubenswrapper[4662]: E1208 09:29:20.685297 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerName="extract-utilities" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.685303 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerName="extract-utilities" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.685424 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="c07ec2b5-0682-4159-96f6-0308a8752c86" containerName="registry-server" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.687805 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.745177 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2thnx"] Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.833897 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-utilities\") pod \"redhat-marketplace-2thnx\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") " pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.834436 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlncm\" (UniqueName: \"kubernetes.io/projected/f2cb0463-6bcf-4dee-8f77-2bb483349cac-kube-api-access-tlncm\") pod \"redhat-marketplace-2thnx\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") " pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.834621 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-catalog-content\") pod \"redhat-marketplace-2thnx\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") " pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.935815 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlncm\" (UniqueName: \"kubernetes.io/projected/f2cb0463-6bcf-4dee-8f77-2bb483349cac-kube-api-access-tlncm\") pod \"redhat-marketplace-2thnx\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") " pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.935906 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-catalog-content\") pod \"redhat-marketplace-2thnx\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") " pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.935952 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-utilities\") pod \"redhat-marketplace-2thnx\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") " pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.936344 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-utilities\") pod \"redhat-marketplace-2thnx\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") " pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.936388 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-catalog-content\") pod \"redhat-marketplace-2thnx\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") " pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:20 crc kubenswrapper[4662]: I1208 09:29:20.955695 4662 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tlncm\" (UniqueName: \"kubernetes.io/projected/f2cb0463-6bcf-4dee-8f77-2bb483349cac-kube-api-access-tlncm\") pod \"redhat-marketplace-2thnx\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") " pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:21 crc kubenswrapper[4662]: I1208 09:29:21.042423 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:21 crc kubenswrapper[4662]: I1208 09:29:21.492466 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2thnx"] Dec 08 09:29:21 crc kubenswrapper[4662]: W1208 09:29:21.495098 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2cb0463_6bcf_4dee_8f77_2bb483349cac.slice/crio-35bc216f6eb5fe6f0189fd2612b427ed2d4327591d2fd1a9b7dc06b695799684 WatchSource:0}: Error finding container 35bc216f6eb5fe6f0189fd2612b427ed2d4327591d2fd1a9b7dc06b695799684: Status 404 returned error can't find the container with id 35bc216f6eb5fe6f0189fd2612b427ed2d4327591d2fd1a9b7dc06b695799684 Dec 08 09:29:22 crc kubenswrapper[4662]: I1208 09:29:22.093934 4662 generic.go:334] "Generic (PLEG): container finished" podID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerID="73ffeab07ea0c787608b09517b79ff70ee4837706e3a0fbd3fb5d0df8bfb1781" exitCode=0 Dec 08 09:29:22 crc kubenswrapper[4662]: I1208 09:29:22.095144 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2thnx" event={"ID":"f2cb0463-6bcf-4dee-8f77-2bb483349cac","Type":"ContainerDied","Data":"73ffeab07ea0c787608b09517b79ff70ee4837706e3a0fbd3fb5d0df8bfb1781"} Dec 08 09:29:22 crc kubenswrapper[4662]: I1208 09:29:22.095172 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2thnx" event={"ID":"f2cb0463-6bcf-4dee-8f77-2bb483349cac","Type":"ContainerStarted","Data":"35bc216f6eb5fe6f0189fd2612b427ed2d4327591d2fd1a9b7dc06b695799684"} Dec 08 09:29:23 crc kubenswrapper[4662]: I1208 09:29:23.103641 4662 generic.go:334] "Generic (PLEG): container finished" podID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerID="8879b2080833d3d6e30f64ed70d7e6319fa6eb1014f07e8a141c38dbb66d915d" exitCode=0 Dec 08 09:29:23 crc kubenswrapper[4662]: I1208 09:29:23.103897 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2thnx" event={"ID":"f2cb0463-6bcf-4dee-8f77-2bb483349cac","Type":"ContainerDied","Data":"8879b2080833d3d6e30f64ed70d7e6319fa6eb1014f07e8a141c38dbb66d915d"} Dec 08 09:29:24 crc kubenswrapper[4662]: I1208 09:29:24.111138 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2thnx" event={"ID":"f2cb0463-6bcf-4dee-8f77-2bb483349cac","Type":"ContainerStarted","Data":"0c2db9cc9251bc204e144f1e27025933071bd7e533af6ca8e35b4bbe0ab4f4fe"} Dec 08 09:29:24 crc kubenswrapper[4662]: I1208 09:29:24.126931 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2thnx" podStartSLOduration=2.657702626 podStartE2EDuration="4.126914475s" podCreationTimestamp="2025-12-08 09:29:20 +0000 UTC" firstStartedPulling="2025-12-08 09:29:22.095998666 +0000 UTC m=+885.665026666" lastFinishedPulling="2025-12-08 09:29:23.565210525 +0000 UTC m=+887.134238515" observedRunningTime="2025-12-08 09:29:24.125825876 +0000 UTC m=+887.694853866" 
watchObservedRunningTime="2025-12-08 09:29:24.126914475 +0000 UTC m=+887.695942465" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.272729 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-txtq9"] Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.274176 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.277934 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.278162 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.280572 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.280769 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-l66wv" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.305098 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-txtq9"] Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.423003 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpxkv\" (UniqueName: \"kubernetes.io/projected/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-kube-api-access-bpxkv\") pod \"dnsmasq-dns-675f4bcbfc-txtq9\" (UID: \"e49b4397-c667-4a8f-b5c1-c33071fcb8f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.423288 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-config\") pod \"dnsmasq-dns-675f4bcbfc-txtq9\" (UID: \"e49b4397-c667-4a8f-b5c1-c33071fcb8f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.467545 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-86m2d"] Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.468946 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.471600 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.490190 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-86m2d"] Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.532052 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-config\") pod \"dnsmasq-dns-78dd6ddcc-86m2d\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.532110 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-86m2d\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.532135 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lh8v\" (UniqueName: \"kubernetes.io/projected/16dac067-d9fd-42c0-ae95-8d118e6f5cba-kube-api-access-4lh8v\") pod \"dnsmasq-dns-78dd6ddcc-86m2d\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.532179 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpxkv\" (UniqueName: \"kubernetes.io/projected/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-kube-api-access-bpxkv\") pod \"dnsmasq-dns-675f4bcbfc-txtq9\" (UID: \"e49b4397-c667-4a8f-b5c1-c33071fcb8f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.532247 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-config\") pod \"dnsmasq-dns-675f4bcbfc-txtq9\" (UID: \"e49b4397-c667-4a8f-b5c1-c33071fcb8f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.533229 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-config\") pod \"dnsmasq-dns-675f4bcbfc-txtq9\" (UID: \"e49b4397-c667-4a8f-b5c1-c33071fcb8f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.586695 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpxkv\" (UniqueName: \"kubernetes.io/projected/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-kube-api-access-bpxkv\") pod \"dnsmasq-dns-675f4bcbfc-txtq9\" (UID: \"e49b4397-c667-4a8f-b5c1-c33071fcb8f0\") " pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.600470 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.635087 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-config\") pod \"dnsmasq-dns-78dd6ddcc-86m2d\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.635130 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lh8v\" (UniqueName: \"kubernetes.io/projected/16dac067-d9fd-42c0-ae95-8d118e6f5cba-kube-api-access-4lh8v\") pod \"dnsmasq-dns-78dd6ddcc-86m2d\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.635149 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-86m2d\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.635867 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-config\") pod \"dnsmasq-dns-78dd6ddcc-86m2d\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.639387 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-86m2d\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.658107 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lh8v\" (UniqueName: \"kubernetes.io/projected/16dac067-d9fd-42c0-ae95-8d118e6f5cba-kube-api-access-4lh8v\") pod \"dnsmasq-dns-78dd6ddcc-86m2d\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:27 crc kubenswrapper[4662]: I1208 09:29:27.784245 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:28 crc kubenswrapper[4662]: I1208 09:29:28.078481 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-txtq9"] Dec 08 09:29:28 crc kubenswrapper[4662]: I1208 09:29:28.140792 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" event={"ID":"e49b4397-c667-4a8f-b5c1-c33071fcb8f0","Type":"ContainerStarted","Data":"870b457ed6d919bed4324e0856f929f322ed6bb539899c2ef3c50351649ca814"} Dec 08 09:29:28 crc kubenswrapper[4662]: I1208 09:29:28.294733 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-86m2d"] Dec 08 09:29:28 crc kubenswrapper[4662]: W1208 09:29:28.295936 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod16dac067_d9fd_42c0_ae95_8d118e6f5cba.slice/crio-e7c575b871b1d7d41a90c874ac6635c1b599e6f8b52a57f9be69ed5f6c58e78a WatchSource:0}: Error finding container e7c575b871b1d7d41a90c874ac6635c1b599e6f8b52a57f9be69ed5f6c58e78a: Status 404 returned error can't find the container with id e7c575b871b1d7d41a90c874ac6635c1b599e6f8b52a57f9be69ed5f6c58e78a Dec 08 09:29:29 crc kubenswrapper[4662]: I1208 09:29:29.150519 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" event={"ID":"16dac067-d9fd-42c0-ae95-8d118e6f5cba","Type":"ContainerStarted","Data":"e7c575b871b1d7d41a90c874ac6635c1b599e6f8b52a57f9be69ed5f6c58e78a"} Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.517776 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-txtq9"] Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.547268 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jzhfn"] Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.548569 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.563955 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jzhfn"] Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.578486 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jzhfn\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") " pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.578560 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-config\") pod \"dnsmasq-dns-666b6646f7-jzhfn\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") " pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.578616 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj5s2\" (UniqueName: \"kubernetes.io/projected/ad5b733a-89ba-4494-afd9-6994f402db46-kube-api-access-sj5s2\") pod \"dnsmasq-dns-666b6646f7-jzhfn\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") " pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.681877 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-config\") pod \"dnsmasq-dns-666b6646f7-jzhfn\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") " pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.681985 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sj5s2\" (UniqueName: \"kubernetes.io/projected/ad5b733a-89ba-4494-afd9-6994f402db46-kube-api-access-sj5s2\") pod \"dnsmasq-dns-666b6646f7-jzhfn\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") " pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.682084 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jzhfn\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") " pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.683166 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-dns-svc\") pod \"dnsmasq-dns-666b6646f7-jzhfn\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") " pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.683835 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-config\") pod \"dnsmasq-dns-666b6646f7-jzhfn\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") " pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.723425 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sj5s2\" (UniqueName: 
\"kubernetes.io/projected/ad5b733a-89ba-4494-afd9-6994f402db46-kube-api-access-sj5s2\") pod \"dnsmasq-dns-666b6646f7-jzhfn\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") " pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.852857 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-86m2d"] Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.881961 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.887409 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jjmkp"] Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.888623 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.960688 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jjmkp"] Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.990681 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmgbk\" (UniqueName: \"kubernetes.io/projected/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-kube-api-access-vmgbk\") pod \"dnsmasq-dns-57d769cc4f-jjmkp\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.990792 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-config\") pod \"dnsmasq-dns-57d769cc4f-jjmkp\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:30 crc kubenswrapper[4662]: I1208 09:29:30.990893 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jjmkp\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.043300 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.048342 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.092774 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmgbk\" (UniqueName: \"kubernetes.io/projected/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-kube-api-access-vmgbk\") pod \"dnsmasq-dns-57d769cc4f-jjmkp\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.092814 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-config\") pod \"dnsmasq-dns-57d769cc4f-jjmkp\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.092850 4662 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jjmkp\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.093870 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-jjmkp\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.095197 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-config\") pod \"dnsmasq-dns-57d769cc4f-jjmkp\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.123586 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmgbk\" (UniqueName: \"kubernetes.io/projected/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-kube-api-access-vmgbk\") pod \"dnsmasq-dns-57d769cc4f-jjmkp\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.151551 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.208313 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.370021 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2thnx" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.430474 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2thnx"] Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.493955 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jzhfn"] Dec 08 09:29:31 crc kubenswrapper[4662]: W1208 09:29:31.547427 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad5b733a_89ba_4494_afd9_6994f402db46.slice/crio-3e8d225ed6bb0f37b49beaf86809021c2ba8a30ea54a8e73ea7269faf8bb585c WatchSource:0}: Error finding container 3e8d225ed6bb0f37b49beaf86809021c2ba8a30ea54a8e73ea7269faf8bb585c: Status 404 returned error can't find the container with id 3e8d225ed6bb0f37b49beaf86809021c2ba8a30ea54a8e73ea7269faf8bb585c Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.707281 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.709176 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.711927 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.712258 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mrd6c" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.713837 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.714336 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.714381 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.714654 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.714915 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.715190 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.807814 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.808259 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.808287 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9b3e5a2-0303-435d-9bd7-763b2f802e46-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.808320 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.808336 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.808387 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.808435 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.808456 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xzx8\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-kube-api-access-6xzx8\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.808516 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.808542 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.808576 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9b3e5a2-0303-435d-9bd7-763b2f802e46-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.817912 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jjmkp"] Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910444 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910502 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9b3e5a2-0303-435d-9bd7-763b2f802e46-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910538 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910554 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910575 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9b3e5a2-0303-435d-9bd7-763b2f802e46-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910608 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910624 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910648 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910681 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910699 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xzx8\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-kube-api-access-6xzx8\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.910727 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.911049 4662 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.913511 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc 
kubenswrapper[4662]: I1208 09:29:31.914490 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-config-data\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.915595 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.917589 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.918180 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.925031 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.925192 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.925256 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9b3e5a2-0303-435d-9bd7-763b2f802e46-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.930996 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9b3e5a2-0303-435d-9bd7-763b2f802e46-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.931252 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xzx8\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-kube-api-access-6xzx8\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 09:29:31 crc kubenswrapper[4662]: I1208 09:29:31.944520 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " pod="openstack/rabbitmq-server-0" Dec 08 
09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.047260 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.051540 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.055173 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.056237 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.056485 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.056679 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.056934 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.057182 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vqthz" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.057370 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.057540 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.062285 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.201297 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" event={"ID":"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a","Type":"ContainerStarted","Data":"05526583be2314597653ea73b0eb7c79f8fcbc2abbc6a43540a37e480db7ea1d"} Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.214364 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" event={"ID":"ad5b733a-89ba-4494-afd9-6994f402db46","Type":"ContainerStarted","Data":"3e8d225ed6bb0f37b49beaf86809021c2ba8a30ea54a8e73ea7269faf8bb585c"} Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.217168 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.217480 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.217516 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/a9f9be7d-4423-489a-a794-e022a83c9e51-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.218529 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.218561 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.218610 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7vsj\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-kube-api-access-c7vsj\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.218638 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.218654 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.218696 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.218722 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.218773 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9f9be7d-4423-489a-a794-e022a83c9e51-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.319670 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7vsj\" (UniqueName: 
\"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-kube-api-access-c7vsj\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.319947 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.319965 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.320004 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.320023 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.320041 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9f9be7d-4423-489a-a794-e022a83c9e51-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.320071 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.320101 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.320141 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9f9be7d-4423-489a-a794-e022a83c9e51-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.320171 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.320195 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.320417 4662 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.320765 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.332827 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.333113 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.334115 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.334252 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.337690 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.337916 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.337970 4662 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9f9be7d-4423-489a-a794-e022a83c9e51-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.339589 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9f9be7d-4423-489a-a794-e022a83c9e51-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.341920 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7vsj\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-kube-api-access-c7vsj\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.377193 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.602831 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.620588 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.620652 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:29:32 crc kubenswrapper[4662]: W1208 09:29:32.661775 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9b3e5a2_0303_435d_9bd7_763b2f802e46.slice/crio-d660f1b002f72f91e7de2d42cfe5ba64bc29b76b0b495b011d2812a8fc84a135 WatchSource:0}: Error finding container d660f1b002f72f91e7de2d42cfe5ba64bc29b76b0b495b011d2812a8fc84a135: Status 404 returned error can't find the container with id d660f1b002f72f91e7de2d42cfe5ba64bc29b76b0b495b011d2812a8fc84a135 Dec 08 09:29:32 crc kubenswrapper[4662]: I1208 09:29:32.708647 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.228331 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9b3e5a2-0303-435d-9bd7-763b2f802e46","Type":"ContainerStarted","Data":"d660f1b002f72f91e7de2d42cfe5ba64bc29b76b0b495b011d2812a8fc84a135"} Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.228564 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2thnx" podUID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerName="registry-server" containerID="cri-o://0c2db9cc9251bc204e144f1e27025933071bd7e533af6ca8e35b4bbe0ab4f4fe" gracePeriod=2 Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.568256 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.569602 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.572515 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.572542 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.572802 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.572911 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-4bzwg" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.585947 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.596182 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.653305 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zth26\" (UniqueName: \"kubernetes.io/projected/0b3100ca-3241-444e-b279-248592e848fe-kube-api-access-zth26\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.653353 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b3100ca-3241-444e-b279-248592e848fe-kolla-config\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.653381 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b3100ca-3241-444e-b279-248592e848fe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.653572 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: 
\"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.653648 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3100ca-3241-444e-b279-248592e848fe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.653686 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0b3100ca-3241-444e-b279-248592e848fe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.653838 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3100ca-3241-444e-b279-248592e848fe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.653860 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b3100ca-3241-444e-b279-248592e848fe-config-data-default\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.755313 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b3100ca-3241-444e-b279-248592e848fe-kolla-config\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.755391 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b3100ca-3241-444e-b279-248592e848fe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.755430 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.755474 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0b3100ca-3241-444e-b279-248592e848fe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.755492 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3100ca-3241-444e-b279-248592e848fe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 
09:29:33.755545 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3100ca-3241-444e-b279-248592e848fe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.755634 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b3100ca-3241-444e-b279-248592e848fe-config-data-default\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.755726 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zth26\" (UniqueName: \"kubernetes.io/projected/0b3100ca-3241-444e-b279-248592e848fe-kube-api-access-zth26\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.756441 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/0b3100ca-3241-444e-b279-248592e848fe-config-data-generated\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.757946 4662 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.758494 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0b3100ca-3241-444e-b279-248592e848fe-kolla-config\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.758882 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/0b3100ca-3241-444e-b279-248592e848fe-config-data-default\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.760537 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0b3100ca-3241-444e-b279-248592e848fe-operator-scripts\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.766320 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b3100ca-3241-444e-b279-248592e848fe-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.783343 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zth26\" (UniqueName: 
\"kubernetes.io/projected/0b3100ca-3241-444e-b279-248592e848fe-kube-api-access-zth26\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.791110 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b3100ca-3241-444e-b279-248592e848fe-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.797695 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-galera-0\" (UID: \"0b3100ca-3241-444e-b279-248592e848fe\") " pod="openstack/openstack-galera-0" Dec 08 09:29:33 crc kubenswrapper[4662]: I1208 09:29:33.946227 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.223266 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 08 09:29:34 crc kubenswrapper[4662]: W1208 09:29:34.281175 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9f9be7d_4423_489a_a794_e022a83c9e51.slice/crio-55959e8f26bf65f85d02094560b3ce9cb44bf02946cea6ca77b5d2ecd81b94e7 WatchSource:0}: Error finding container 55959e8f26bf65f85d02094560b3ce9cb44bf02946cea6ca77b5d2ecd81b94e7: Status 404 returned error can't find the container with id 55959e8f26bf65f85d02094560b3ce9cb44bf02946cea6ca77b5d2ecd81b94e7 Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.288183 4662 generic.go:334] "Generic (PLEG): container finished" podID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerID="0c2db9cc9251bc204e144f1e27025933071bd7e533af6ca8e35b4bbe0ab4f4fe" exitCode=0 Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.288220 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2thnx" event={"ID":"f2cb0463-6bcf-4dee-8f77-2bb483349cac","Type":"ContainerDied","Data":"0c2db9cc9251bc204e144f1e27025933071bd7e533af6ca8e35b4bbe0ab4f4fe"} Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.584410 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.838319 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.839669 4662 util.go:30] "No sandbox for pod can be found. 
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.848894 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-zkqpg"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.849130 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.849988 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.850128 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.879049 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.977078 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06c22d2-e96f-445d-82d6-54f276df38c8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.977166 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b06c22d2-e96f-445d-82d6-54f276df38c8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.977199 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.977216 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b06c22d2-e96f-445d-82d6-54f276df38c8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.977246 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkrc6\" (UniqueName: \"kubernetes.io/projected/b06c22d2-e96f-445d-82d6-54f276df38c8-kube-api-access-qkrc6\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.977324 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b06c22d2-e96f-445d-82d6-54f276df38c8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.977345 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06c22d2-e96f-445d-82d6-54f276df38c8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:34 crc kubenswrapper[4662]: I1208 09:29:34.977391 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b06c22d2-e96f-445d-82d6-54f276df38c8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.080713 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06c22d2-e96f-445d-82d6-54f276df38c8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.080822 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b06c22d2-e96f-445d-82d6-54f276df38c8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.080856 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.080874 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b06c22d2-e96f-445d-82d6-54f276df38c8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.080902 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkrc6\" (UniqueName: \"kubernetes.io/projected/b06c22d2-e96f-445d-82d6-54f276df38c8-kube-api-access-qkrc6\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.080933 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b06c22d2-e96f-445d-82d6-54f276df38c8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.080951 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06c22d2-e96f-445d-82d6-54f276df38c8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.080969 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b06c22d2-e96f-445d-82d6-54f276df38c8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.081820 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b06c22d2-e96f-445d-82d6-54f276df38c8-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.083208 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b06c22d2-e96f-445d-82d6-54f276df38c8-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.083482 4662 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.094852 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b06c22d2-e96f-445d-82d6-54f276df38c8-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.095099 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b06c22d2-e96f-445d-82d6-54f276df38c8-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.105886 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b06c22d2-e96f-445d-82d6-54f276df38c8-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.112927 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b06c22d2-e96f-445d-82d6-54f276df38c8-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.129973 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.140121 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkrc6\" (UniqueName: \"kubernetes.io/projected/b06c22d2-e96f-445d-82d6-54f276df38c8-kube-api-access-qkrc6\") pod \"openstack-cell1-galera-0\" (UID: \"b06c22d2-e96f-445d-82d6-54f276df38c8\") " pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.171084 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.274613 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"]
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.275802 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.287918 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.288047 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.289470 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-4l6wg"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.310850 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"]
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.317162 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9f9be7d-4423-489a-a794-e022a83c9e51","Type":"ContainerStarted","Data":"55959e8f26bf65f85d02094560b3ce9cb44bf02946cea6ca77b5d2ecd81b94e7"}
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.391010 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0740d08e-8e81-4133-9969-7b777cfef0f7-config-data\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.391112 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740d08e-8e81-4133-9969-7b777cfef0f7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.391155 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqsdm\" (UniqueName: \"kubernetes.io/projected/0740d08e-8e81-4133-9969-7b777cfef0f7-kube-api-access-jqsdm\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.391204 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740d08e-8e81-4133-9969-7b777cfef0f7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.391298 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0740d08e-8e81-4133-9969-7b777cfef0f7-kolla-config\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.493238 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740d08e-8e81-4133-9969-7b777cfef0f7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.493305 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqsdm\" (UniqueName: \"kubernetes.io/projected/0740d08e-8e81-4133-9969-7b777cfef0f7-kube-api-access-jqsdm\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.493342 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740d08e-8e81-4133-9969-7b777cfef0f7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.493385 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0740d08e-8e81-4133-9969-7b777cfef0f7-kolla-config\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.493415 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0740d08e-8e81-4133-9969-7b777cfef0f7-config-data\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.494615 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0740d08e-8e81-4133-9969-7b777cfef0f7-config-data\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.496621 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0740d08e-8e81-4133-9969-7b777cfef0f7-kolla-config\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.516989 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0740d08e-8e81-4133-9969-7b777cfef0f7-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.520086 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0740d08e-8e81-4133-9969-7b777cfef0f7-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.535314 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqsdm\" (UniqueName: \"kubernetes.io/projected/0740d08e-8e81-4133-9969-7b777cfef0f7-kube-api-access-jqsdm\") pod \"memcached-0\" (UID: \"0740d08e-8e81-4133-9969-7b777cfef0f7\") " pod="openstack/memcached-0"
Dec 08 09:29:35 crc kubenswrapper[4662]: I1208 09:29:35.601058 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0"
Dec 08 09:29:36 crc kubenswrapper[4662]: I1208 09:29:36.810968 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 08 09:29:36 crc kubenswrapper[4662]: I1208 09:29:36.811950 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 08 09:29:36 crc kubenswrapper[4662]: I1208 09:29:36.818200 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-98jld"
Dec 08 09:29:36 crc kubenswrapper[4662]: I1208 09:29:36.821430 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sxwm\" (UniqueName: \"kubernetes.io/projected/05001747-2338-4f97-8f09-68b9541f94e8-kube-api-access-6sxwm\") pod \"kube-state-metrics-0\" (UID: \"05001747-2338-4f97-8f09-68b9541f94e8\") " pod="openstack/kube-state-metrics-0"
Dec 08 09:29:36 crc kubenswrapper[4662]: I1208 09:29:36.839839 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"]
Dec 08 09:29:36 crc kubenswrapper[4662]: I1208 09:29:36.923598 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sxwm\" (UniqueName: \"kubernetes.io/projected/05001747-2338-4f97-8f09-68b9541f94e8-kube-api-access-6sxwm\") pod \"kube-state-metrics-0\" (UID: \"05001747-2338-4f97-8f09-68b9541f94e8\") " pod="openstack/kube-state-metrics-0"
Dec 08 09:29:36 crc kubenswrapper[4662]: I1208 09:29:36.973654 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sxwm\" (UniqueName: \"kubernetes.io/projected/05001747-2338-4f97-8f09-68b9541f94e8-kube-api-access-6sxwm\") pod \"kube-state-metrics-0\" (UID: \"05001747-2338-4f97-8f09-68b9541f94e8\") " pod="openstack/kube-state-metrics-0"
Dec 08 09:29:37 crc kubenswrapper[4662]: I1208 09:29:37.149784 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Dec 08 09:29:40 crc kubenswrapper[4662]: W1208 09:29:40.032456 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b3100ca_3241_444e_b279_248592e848fe.slice/crio-d4e8c39f942f99bc2c470c9e85208adecbbf94d925ca8e791c5a9732b239a28f WatchSource:0}: Error finding container d4e8c39f942f99bc2c470c9e85208adecbbf94d925ca8e791c5a9732b239a28f: Status 404 returned error can't find the container with id d4e8c39f942f99bc2c470c9e85208adecbbf94d925ca8e791c5a9732b239a28f
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.107253 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2thnx"
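The two W1208 "Failed to process watch event ... Status 404" warnings above are cAdvisor noticing a new crio-<id> cgroup before the container is resolvable through the runtime; in this excerpt both IDs (55959e8f... and d4e8c39f...) later show up in ContainerStarted PLEG events, so the warnings were transient. A hedged sketch, under the same hypothetical kubelet.log assumption, that flags any 404'd container ID which never reaches ContainerStarted:

import re

# ID as it appears in the cgroup path of the watch-event warning
WATCH_404 = re.compile(r"Failed to process watch event .*?crio-([0-9a-f]{64}) ")
# ID as it appears in a PLEG ContainerStarted event
STARTED = re.compile(r'"Type":"ContainerStarted","Data":"([0-9a-f]{64})"')

def unresolved_watch_failures(path="kubelet.log"):  # hypothetical file name
    failed, started = set(), set()
    with open(path) as f:
        for line in f:
            failed.update(WATCH_404.findall(line))
            started.update(STARTED.findall(line))
    return failed - started  # IDs that 404'd and never started in the excerpt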
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.272574 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-utilities\") pod \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") "
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.272958 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-catalog-content\") pod \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") "
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.273024 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlncm\" (UniqueName: \"kubernetes.io/projected/f2cb0463-6bcf-4dee-8f77-2bb483349cac-kube-api-access-tlncm\") pod \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\" (UID: \"f2cb0463-6bcf-4dee-8f77-2bb483349cac\") "
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.274267 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-utilities" (OuterVolumeSpecName: "utilities") pod "f2cb0463-6bcf-4dee-8f77-2bb483349cac" (UID: "f2cb0463-6bcf-4dee-8f77-2bb483349cac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.281968 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2cb0463-6bcf-4dee-8f77-2bb483349cac-kube-api-access-tlncm" (OuterVolumeSpecName: "kube-api-access-tlncm") pod "f2cb0463-6bcf-4dee-8f77-2bb483349cac" (UID: "f2cb0463-6bcf-4dee-8f77-2bb483349cac"). InnerVolumeSpecName "kube-api-access-tlncm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.285142 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f2cb0463-6bcf-4dee-8f77-2bb483349cac" (UID: "f2cb0463-6bcf-4dee-8f77-2bb483349cac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.375660 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.375959 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f2cb0463-6bcf-4dee-8f77-2bb483349cac-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.375970 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlncm\" (UniqueName: \"kubernetes.io/projected/f2cb0463-6bcf-4dee-8f77-2bb483349cac-kube-api-access-tlncm\") on node \"crc\" DevicePath \"\""
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.393687 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2thnx" event={"ID":"f2cb0463-6bcf-4dee-8f77-2bb483349cac","Type":"ContainerDied","Data":"35bc216f6eb5fe6f0189fd2612b427ed2d4327591d2fd1a9b7dc06b695799684"}
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.393754 4662 scope.go:117] "RemoveContainer" containerID="0c2db9cc9251bc204e144f1e27025933071bd7e533af6ca8e35b4bbe0ab4f4fe"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.393873 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2thnx"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.412608 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0b3100ca-3241-444e-b279-248592e848fe","Type":"ContainerStarted","Data":"d4e8c39f942f99bc2c470c9e85208adecbbf94d925ca8e791c5a9732b239a28f"}
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.468222 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2thnx"]
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.480230 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2thnx"]
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.722157 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" path="/var/lib/kubelet/pods/f2cb0463-6bcf-4dee-8f77-2bb483349cac/volumes"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.900820 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-tx8rx"]
Dec 08 09:29:40 crc kubenswrapper[4662]: E1208 09:29:40.901184 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerName="extract-utilities"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.901204 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerName="extract-utilities"
Dec 08 09:29:40 crc kubenswrapper[4662]: E1208 09:29:40.901233 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerName="registry-server"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.901244 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerName="registry-server"
Dec 08 09:29:40 crc kubenswrapper[4662]: E1208 09:29:40.901264 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerName="extract-content"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.901274 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerName="extract-content"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.901502 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2cb0463-6bcf-4dee-8f77-2bb483349cac" containerName="registry-server"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.902389 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.905008 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-5nzn4"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.905186 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.905325 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.913912 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tx8rx"]
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.931581 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-rbsxq"]
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.933177 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.983006 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rbsxq"]
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.988549 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f875ff2-9f06-470b-89dd-2f6215a7e40c-combined-ca-bundle\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.988631 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f875ff2-9f06-470b-89dd-2f6215a7e40c-ovn-controller-tls-certs\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.988662 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f875ff2-9f06-470b-89dd-2f6215a7e40c-var-run\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.988690 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f875ff2-9f06-470b-89dd-2f6215a7e40c-var-log-ovn\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.988722 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f875ff2-9f06-470b-89dd-2f6215a7e40c-var-run-ovn\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.988756 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f875ff2-9f06-470b-89dd-2f6215a7e40c-scripts\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:40 crc kubenswrapper[4662]: I1208 09:29:40.988800 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn5xn\" (UniqueName: \"kubernetes.io/projected/4f875ff2-9f06-470b-89dd-2f6215a7e40c-kube-api-access-kn5xn\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093534 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-var-run\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093588 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f875ff2-9f06-470b-89dd-2f6215a7e40c-var-run-ovn\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093611 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f875ff2-9f06-470b-89dd-2f6215a7e40c-scripts\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093657 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn5xn\" (UniqueName: \"kubernetes.io/projected/4f875ff2-9f06-470b-89dd-2f6215a7e40c-kube-api-access-kn5xn\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093677 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27c667fc-f9ca-4305-aa3b-4be2ae723674-scripts\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093699 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f875ff2-9f06-470b-89dd-2f6215a7e40c-combined-ca-bundle\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093717 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-etc-ovs\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093766 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-var-lib\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093795 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f875ff2-9f06-470b-89dd-2f6215a7e40c-ovn-controller-tls-certs\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093818 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-var-log\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093838 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f875ff2-9f06-470b-89dd-2f6215a7e40c-var-run\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093853 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cf5r\" (UniqueName: \"kubernetes.io/projected/27c667fc-f9ca-4305-aa3b-4be2ae723674-kube-api-access-9cf5r\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.093879 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f875ff2-9f06-470b-89dd-2f6215a7e40c-var-log-ovn\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.094534 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f875ff2-9f06-470b-89dd-2f6215a7e40c-var-run\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.094602 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f875ff2-9f06-470b-89dd-2f6215a7e40c-var-run-ovn\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.094975 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f875ff2-9f06-470b-89dd-2f6215a7e40c-var-log-ovn\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.100622 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f875ff2-9f06-470b-89dd-2f6215a7e40c-combined-ca-bundle\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.103681 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f875ff2-9f06-470b-89dd-2f6215a7e40c-scripts\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.107594 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f875ff2-9f06-470b-89dd-2f6215a7e40c-ovn-controller-tls-certs\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.117620 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn5xn\" (UniqueName: \"kubernetes.io/projected/4f875ff2-9f06-470b-89dd-2f6215a7e40c-kube-api-access-kn5xn\") pod \"ovn-controller-tx8rx\" (UID: \"4f875ff2-9f06-470b-89dd-2f6215a7e40c\") " pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.195668 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-var-log\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.196191 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cf5r\" (UniqueName: \"kubernetes.io/projected/27c667fc-f9ca-4305-aa3b-4be2ae723674-kube-api-access-9cf5r\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.195954 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-var-log\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.196247 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-var-run\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.196323 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-var-run\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.196462 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27c667fc-f9ca-4305-aa3b-4be2ae723674-scripts\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.196496 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-etc-ovs\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.196565 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-var-lib\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.196816 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-var-lib\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.196968 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/27c667fc-f9ca-4305-aa3b-4be2ae723674-etc-ovs\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.199077 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/27c667fc-f9ca-4305-aa3b-4be2ae723674-scripts\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.230958 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cf5r\" (UniqueName: \"kubernetes.io/projected/27c667fc-f9ca-4305-aa3b-4be2ae723674-kube-api-access-9cf5r\") pod \"ovn-controller-ovs-rbsxq\" (UID: \"27c667fc-f9ca-4305-aa3b-4be2ae723674\") " pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.232864 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tx8rx"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.263552 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-rbsxq"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.580714 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.582152 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.586013 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.586473 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.586645 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nzpdg"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.586865 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.605824 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.607579 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.704415 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.708636 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.708688 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wg9p\" (UniqueName: \"kubernetes.io/projected/a965cd5f-6888-4033-9d26-02978e2e0f36-kube-api-access-2wg9p\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.708721 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a965cd5f-6888-4033-9d26-02978e2e0f36-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.708759 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a965cd5f-6888-4033-9d26-02978e2e0f36-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.708804 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a965cd5f-6888-4033-9d26-02978e2e0f36-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.708835 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a965cd5f-6888-4033-9d26-02978e2e0f36-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.708853 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a965cd5f-6888-4033-9d26-02978e2e0f36-config\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.708888 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a965cd5f-6888-4033-9d26-02978e2e0f36-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.810129 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wg9p\" (UniqueName: \"kubernetes.io/projected/a965cd5f-6888-4033-9d26-02978e2e0f36-kube-api-access-2wg9p\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.810205 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a965cd5f-6888-4033-9d26-02978e2e0f36-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.810225 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a965cd5f-6888-4033-9d26-02978e2e0f36-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.810301 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a965cd5f-6888-4033-9d26-02978e2e0f36-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.810345 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a965cd5f-6888-4033-9d26-02978e2e0f36-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.810365 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a965cd5f-6888-4033-9d26-02978e2e0f36-config\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.810465 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a965cd5f-6888-4033-9d26-02978e2e0f36-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.810492 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.810860 4662 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.812262 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a965cd5f-6888-4033-9d26-02978e2e0f36-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.814534 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/a965cd5f-6888-4033-9d26-02978e2e0f36-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.814882 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a965cd5f-6888-4033-9d26-02978e2e0f36-config\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.818009 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a965cd5f-6888-4033-9d26-02978e2e0f36-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.819119 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/a965cd5f-6888-4033-9d26-02978e2e0f36-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.833167 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/a965cd5f-6888-4033-9d26-02978e2e0f36-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.847312 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wg9p\" (UniqueName: \"kubernetes.io/projected/a965cd5f-6888-4033-9d26-02978e2e0f36-kube-api-access-2wg9p\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.862649 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"ovsdbserver-nb-0\" (UID: \"a965cd5f-6888-4033-9d26-02978e2e0f36\") " pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:41 crc kubenswrapper[4662]: I1208 09:29:41.902270 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.784554 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.788301 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.794713 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.795395 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.795504 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.797363 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-grs2j"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.813703 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.884973 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.885069 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d8d4f46-241c-490e-b219-2600ed0a74c5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.885275 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7988\" (UniqueName: \"kubernetes.io/projected/2d8d4f46-241c-490e-b219-2600ed0a74c5-kube-api-access-p7988\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.885390 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2d8d4f46-241c-490e-b219-2600ed0a74c5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.885434 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d4f46-241c-490e-b219-2600ed0a74c5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.885462 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8d4f46-241c-490e-b219-2600ed0a74c5-config\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.885602 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8d4f46-241c-490e-b219-2600ed0a74c5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.885711 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8d4f46-241c-490e-b219-2600ed0a74c5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.987669 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7988\" (UniqueName: \"kubernetes.io/projected/2d8d4f46-241c-490e-b219-2600ed0a74c5-kube-api-access-p7988\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.987735 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2d8d4f46-241c-490e-b219-2600ed0a74c5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.987767 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d4f46-241c-490e-b219-2600ed0a74c5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.987784 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8d4f46-241c-490e-b219-2600ed0a74c5-config\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.987806 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8d4f46-241c-490e-b219-2600ed0a74c5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.987833 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8d4f46-241c-490e-b219-2600ed0a74c5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.987865 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.987881 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d8d4f46-241c-490e-b219-2600ed0a74c5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.988805 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/2d8d4f46-241c-490e-b219-2600ed0a74c5-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.989023 4662 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.989683 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d8d4f46-241c-490e-b219-2600ed0a74c5-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.989024 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8d4f46-241c-490e-b219-2600ed0a74c5-config\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.994111 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8d4f46-241c-490e-b219-2600ed0a74c5-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.997210 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d8d4f46-241c-490e-b219-2600ed0a74c5-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:44 crc kubenswrapper[4662]: I1208 09:29:44.998134 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d8d4f46-241c-490e-b219-2600ed0a74c5-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:45 crc kubenswrapper[4662]: I1208 09:29:45.012949 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:45 crc kubenswrapper[4662]: I1208 09:29:45.022167 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7988\" (UniqueName: \"kubernetes.io/projected/2d8d4f46-241c-490e-b219-2600ed0a74c5-kube-api-access-p7988\") pod \"ovsdbserver-sb-0\" (UID: \"2d8d4f46-241c-490e-b219-2600ed0a74c5\") " pod="openstack/ovsdbserver-sb-0"
Dec 08 09:29:45 crc kubenswrapper[4662]: I1208 09:29:45.129062 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Dec 08 09:29:49 crc kubenswrapper[4662]: E1208 09:29:49.317890 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Dec 08 09:29:49 crc kubenswrapper[4662]: E1208 09:29:49.318799 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6xzx8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(a9b3e5a2-0303-435d-9bd7-763b2f802e46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:29:49 crc kubenswrapper[4662]: E1208 09:29:49.322416 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" 
podUID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" Dec 08 09:29:49 crc kubenswrapper[4662]: I1208 09:29:49.332155 4662 scope.go:117] "RemoveContainer" containerID="8879b2080833d3d6e30f64ed70d7e6319fa6eb1014f07e8a141c38dbb66d915d" Dec 08 09:29:49 crc kubenswrapper[4662]: I1208 09:29:49.493507 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b06c22d2-e96f-445d-82d6-54f276df38c8","Type":"ContainerStarted","Data":"09f1c106e5295f1f71ca12829e545479dde6683d204205e631b40ec6a95cf78f"} Dec 08 09:29:49 crc kubenswrapper[4662]: E1208 09:29:49.497192 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" Dec 08 09:29:57 crc kubenswrapper[4662]: E1208 09:29:57.019267 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified" Dec 08 09:29:57 crc kubenswrapper[4662]: E1208 09:29:57.019860 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zth26,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(0b3100ca-3241-444e-b279-248592e848fe): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 
08 09:29:57 crc kubenswrapper[4662]: E1208 09:29:57.021520 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="0b3100ca-3241-444e-b279-248592e848fe" Dec 08 09:29:57 crc kubenswrapper[4662]: I1208 09:29:57.471006 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Dec 08 09:29:57 crc kubenswrapper[4662]: I1208 09:29:57.550308 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 08 09:29:57 crc kubenswrapper[4662]: E1208 09:29:57.562015 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="0b3100ca-3241-444e-b279-248592e848fe" Dec 08 09:29:57 crc kubenswrapper[4662]: I1208 09:29:57.854477 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-rbsxq"] Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.107611 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.107826 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sj5s2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-jzhfn_openstack(ad5b733a-89ba-4494-afd9-6994f402db46): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.109053 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" podUID="ad5b733a-89ba-4494-afd9-6994f402db46" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.123579 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.123777 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4lh8v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-86m2d_openstack(16dac067-d9fd-42c0-ae95-8d118e6f5cba): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.124948 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" podUID="16dac067-d9fd-42c0-ae95-8d118e6f5cba" Dec 08 09:29:58 crc kubenswrapper[4662]: I1208 09:29:58.127210 4662 scope.go:117] "RemoveContainer" containerID="73ffeab07ea0c787608b09517b79ff70ee4837706e3a0fbd3fb5d0df8bfb1781" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.150235 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.150540 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpxkv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-txtq9_openstack(e49b4397-c667-4a8f-b5c1-c33071fcb8f0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.151716 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" podUID="e49b4397-c667-4a8f-b5c1-c33071fcb8f0" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.200963 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.203060 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmgbk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-jjmkp_openstack(cb1d60dc-cf58-4298-bc3e-71eb70bcd64a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.204893 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" podUID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" Dec 08 09:29:58 crc kubenswrapper[4662]: I1208 09:29:58.571518 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05001747-2338-4f97-8f09-68b9541f94e8","Type":"ContainerStarted","Data":"11d5cefe74c6a0927dc5ff96853924aa6365f8e4edfe96cd7f9ce2917d3850ba"} Dec 08 09:29:58 crc kubenswrapper[4662]: I1208 09:29:58.575919 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b06c22d2-e96f-445d-82d6-54f276df38c8","Type":"ContainerStarted","Data":"47c49d4f9c0ee21742cacab58f6c2106dfacfbb4f427d190c5bda3037229af9c"} Dec 08 09:29:58 crc kubenswrapper[4662]: I1208 09:29:58.577774 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rbsxq" event={"ID":"27c667fc-f9ca-4305-aa3b-4be2ae723674","Type":"ContainerStarted","Data":"79662f50339a579dca4fd1a0b32575297958b1990202c23330a091bbbca7e5ca"} Dec 08 09:29:58 crc kubenswrapper[4662]: I1208 09:29:58.580004 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0740d08e-8e81-4133-9969-7b777cfef0f7","Type":"ContainerStarted","Data":"222786949d6340cf32aafddd4fd4ef93e37bbe96d9e7bc8b3518dc155e507058"} Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.591933 4662 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" podUID="ad5b733a-89ba-4494-afd9-6994f402db46" Dec 08 09:29:58 crc kubenswrapper[4662]: E1208 09:29:58.592861 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" podUID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" Dec 08 09:29:58 crc kubenswrapper[4662]: I1208 09:29:58.750073 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tx8rx"] Dec 08 09:29:58 crc kubenswrapper[4662]: W1208 09:29:58.812956 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f875ff2_9f06_470b_89dd_2f6215a7e40c.slice/crio-df30ba90980cbd84b1e4d4bf1d64b425b72187a5b0e70841b203e80288deeff8 WatchSource:0}: Error finding container df30ba90980cbd84b1e4d4bf1d64b425b72187a5b0e70841b203e80288deeff8: Status 404 returned error can't find the container with id df30ba90980cbd84b1e4d4bf1d64b425b72187a5b0e70841b203e80288deeff8 Dec 08 09:29:58 crc kubenswrapper[4662]: I1208 09:29:58.869867 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Dec 08 09:29:58 crc kubenswrapper[4662]: I1208 09:29:58.989271 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Dec 08 09:29:59 crc kubenswrapper[4662]: W1208 09:29:59.220012 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda965cd5f_6888_4033_9d26_02978e2e0f36.slice/crio-7580e11fc6d9a7439695808df8834ba5002ccf689adc324e4d927885dfb51df5 WatchSource:0}: Error finding container 7580e11fc6d9a7439695808df8834ba5002ccf689adc324e4d927885dfb51df5: Status 404 returned error can't find the container with id 7580e11fc6d9a7439695808df8834ba5002ccf689adc324e4d927885dfb51df5 Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.265710 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.277010 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.371674 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-config\") pod \"e49b4397-c667-4a8f-b5c1-c33071fcb8f0\" (UID: \"e49b4397-c667-4a8f-b5c1-c33071fcb8f0\") " Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.372080 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpxkv\" (UniqueName: \"kubernetes.io/projected/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-kube-api-access-bpxkv\") pod \"e49b4397-c667-4a8f-b5c1-c33071fcb8f0\" (UID: \"e49b4397-c667-4a8f-b5c1-c33071fcb8f0\") " Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.372227 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-config" (OuterVolumeSpecName: "config") pod "e49b4397-c667-4a8f-b5c1-c33071fcb8f0" (UID: "e49b4397-c667-4a8f-b5c1-c33071fcb8f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.372268 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-config\") pod \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.372293 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-dns-svc\") pod \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.372331 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lh8v\" (UniqueName: \"kubernetes.io/projected/16dac067-d9fd-42c0-ae95-8d118e6f5cba-kube-api-access-4lh8v\") pod \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\" (UID: \"16dac067-d9fd-42c0-ae95-8d118e6f5cba\") " Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.372793 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-config" (OuterVolumeSpecName: "config") pod "16dac067-d9fd-42c0-ae95-8d118e6f5cba" (UID: "16dac067-d9fd-42c0-ae95-8d118e6f5cba"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.372870 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "16dac067-d9fd-42c0-ae95-8d118e6f5cba" (UID: "16dac067-d9fd-42c0-ae95-8d118e6f5cba"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.373338 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.373354 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/16dac067-d9fd-42c0-ae95-8d118e6f5cba-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.373363 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.376931 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16dac067-d9fd-42c0-ae95-8d118e6f5cba-kube-api-access-4lh8v" (OuterVolumeSpecName: "kube-api-access-4lh8v") pod "16dac067-d9fd-42c0-ae95-8d118e6f5cba" (UID: "16dac067-d9fd-42c0-ae95-8d118e6f5cba"). InnerVolumeSpecName "kube-api-access-4lh8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.377011 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-kube-api-access-bpxkv" (OuterVolumeSpecName: "kube-api-access-bpxkv") pod "e49b4397-c667-4a8f-b5c1-c33071fcb8f0" (UID: "e49b4397-c667-4a8f-b5c1-c33071fcb8f0"). InnerVolumeSpecName "kube-api-access-bpxkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.474529 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lh8v\" (UniqueName: \"kubernetes.io/projected/16dac067-d9fd-42c0-ae95-8d118e6f5cba-kube-api-access-4lh8v\") on node \"crc\" DevicePath \"\"" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.474569 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpxkv\" (UniqueName: \"kubernetes.io/projected/e49b4397-c667-4a8f-b5c1-c33071fcb8f0-kube-api-access-bpxkv\") on node \"crc\" DevicePath \"\"" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.588499 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" event={"ID":"e49b4397-c667-4a8f-b5c1-c33071fcb8f0","Type":"ContainerDied","Data":"870b457ed6d919bed4324e0856f929f322ed6bb539899c2ef3c50351649ca814"} Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.588562 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-txtq9" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.589879 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2d8d4f46-241c-490e-b219-2600ed0a74c5","Type":"ContainerStarted","Data":"b2192024d3b355df04d3ba6e8b37ee93808ad07287db40cdcb217cb759f64a99"} Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.593607 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a965cd5f-6888-4033-9d26-02978e2e0f36","Type":"ContainerStarted","Data":"7580e11fc6d9a7439695808df8834ba5002ccf689adc324e4d927885dfb51df5"} Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.594908 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" event={"ID":"16dac067-d9fd-42c0-ae95-8d118e6f5cba","Type":"ContainerDied","Data":"e7c575b871b1d7d41a90c874ac6635c1b599e6f8b52a57f9be69ed5f6c58e78a"} Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.595106 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-86m2d" Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.608055 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tx8rx" event={"ID":"4f875ff2-9f06-470b-89dd-2f6215a7e40c","Type":"ContainerStarted","Data":"df30ba90980cbd84b1e4d4bf1d64b425b72187a5b0e70841b203e80288deeff8"} Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.615890 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9f9be7d-4423-489a-a794-e022a83c9e51","Type":"ContainerStarted","Data":"b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881"} Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.735064 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-86m2d"] Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.751803 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-86m2d"] Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.826381 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-txtq9"] Dec 08 09:29:59 crc kubenswrapper[4662]: I1208 09:29:59.850167 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-txtq9"] Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.161328 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97"] Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.162584 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.167680 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.167882 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.171757 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97"] Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.192942 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f588abef-a98b-4281-bce4-b5871fea382b-config-volume\") pod \"collect-profiles-29419770-dsq97\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.193043 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dplfm\" (UniqueName: \"kubernetes.io/projected/f588abef-a98b-4281-bce4-b5871fea382b-kube-api-access-dplfm\") pod \"collect-profiles-29419770-dsq97\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.193066 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f588abef-a98b-4281-bce4-b5871fea382b-secret-volume\") pod \"collect-profiles-29419770-dsq97\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.294341 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dplfm\" (UniqueName: \"kubernetes.io/projected/f588abef-a98b-4281-bce4-b5871fea382b-kube-api-access-dplfm\") pod \"collect-profiles-29419770-dsq97\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.294393 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f588abef-a98b-4281-bce4-b5871fea382b-secret-volume\") pod \"collect-profiles-29419770-dsq97\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.294461 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f588abef-a98b-4281-bce4-b5871fea382b-config-volume\") pod \"collect-profiles-29419770-dsq97\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.295316 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f588abef-a98b-4281-bce4-b5871fea382b-config-volume\") pod 
\"collect-profiles-29419770-dsq97\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.301936 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f588abef-a98b-4281-bce4-b5871fea382b-secret-volume\") pod \"collect-profiles-29419770-dsq97\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.320874 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dplfm\" (UniqueName: \"kubernetes.io/projected/f588abef-a98b-4281-bce4-b5871fea382b-kube-api-access-dplfm\") pod \"collect-profiles-29419770-dsq97\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.482834 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.711445 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16dac067-d9fd-42c0-ae95-8d118e6f5cba" path="/var/lib/kubelet/pods/16dac067-d9fd-42c0-ae95-8d118e6f5cba/volumes" Dec 08 09:30:00 crc kubenswrapper[4662]: I1208 09:30:00.712316 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e49b4397-c667-4a8f-b5c1-c33071fcb8f0" path="/var/lib/kubelet/pods/e49b4397-c667-4a8f-b5c1-c33071fcb8f0/volumes" Dec 08 09:30:02 crc kubenswrapper[4662]: I1208 09:30:02.611623 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:30:02 crc kubenswrapper[4662]: I1208 09:30:02.611975 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:30:03 crc kubenswrapper[4662]: I1208 09:30:03.651538 4662 generic.go:334] "Generic (PLEG): container finished" podID="b06c22d2-e96f-445d-82d6-54f276df38c8" containerID="47c49d4f9c0ee21742cacab58f6c2106dfacfbb4f427d190c5bda3037229af9c" exitCode=0 Dec 08 09:30:03 crc kubenswrapper[4662]: I1208 09:30:03.651581 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b06c22d2-e96f-445d-82d6-54f276df38c8","Type":"ContainerDied","Data":"47c49d4f9c0ee21742cacab58f6c2106dfacfbb4f427d190c5bda3037229af9c"} Dec 08 09:30:03 crc kubenswrapper[4662]: I1208 09:30:03.942047 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97"] Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.030573 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xm7lz"] Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.038684 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.053355 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xm7lz"] Dec 08 09:30:04 crc kubenswrapper[4662]: W1208 09:30:04.069566 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf588abef_a98b_4281_bce4_b5871fea382b.slice/crio-0925b1642a9e8960b1366a452bdcc78eab4d461a7543ad5e51f4950e2c4e2819 WatchSource:0}: Error finding container 0925b1642a9e8960b1366a452bdcc78eab4d461a7543ad5e51f4950e2c4e2819: Status 404 returned error can't find the container with id 0925b1642a9e8960b1366a452bdcc78eab4d461a7543ad5e51f4950e2c4e2819 Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.160549 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-catalog-content\") pod \"certified-operators-xm7lz\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") " pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.160602 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-utilities\") pod \"certified-operators-xm7lz\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") " pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.160665 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wz86\" (UniqueName: \"kubernetes.io/projected/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-kube-api-access-5wz86\") pod \"certified-operators-xm7lz\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") " pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.261707 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-utilities\") pod \"certified-operators-xm7lz\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") " pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.261862 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5wz86\" (UniqueName: \"kubernetes.io/projected/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-kube-api-access-5wz86\") pod \"certified-operators-xm7lz\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") " pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.261937 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-catalog-content\") pod \"certified-operators-xm7lz\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") " pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.262408 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-catalog-content\") pod \"certified-operators-xm7lz\" (UID: 
\"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") " pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.262429 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-utilities\") pod \"certified-operators-xm7lz\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") " pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.279191 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wz86\" (UniqueName: \"kubernetes.io/projected/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-kube-api-access-5wz86\") pod \"certified-operators-xm7lz\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") " pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.414841 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xm7lz" Dec 08 09:30:04 crc kubenswrapper[4662]: I1208 09:30:04.662672 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" event={"ID":"f588abef-a98b-4281-bce4-b5871fea382b","Type":"ContainerStarted","Data":"0925b1642a9e8960b1366a452bdcc78eab4d461a7543ad5e51f4950e2c4e2819"} Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.695997 4662 generic.go:334] "Generic (PLEG): container finished" podID="f588abef-a98b-4281-bce4-b5871fea382b" containerID="a8b974ef5b2026ff0e6f566649706e8903e8ee66dfbb0502c86ca0dcb6e5d42e" exitCode=0 Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.696175 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" event={"ID":"f588abef-a98b-4281-bce4-b5871fea382b","Type":"ContainerDied","Data":"a8b974ef5b2026ff0e6f566649706e8903e8ee66dfbb0502c86ca0dcb6e5d42e"} Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.704103 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a965cd5f-6888-4033-9d26-02978e2e0f36","Type":"ContainerStarted","Data":"dbce295288ace97ea463041ba3be3d6b3cea7753512e6a29b5c76d11fbb86145"} Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.736721 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b06c22d2-e96f-445d-82d6-54f276df38c8","Type":"ContainerStarted","Data":"d70d1d700093a8bcf5f03287a7504d9c0059830773f954dd3044e9b7fd86eeee"} Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.755244 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rbsxq" event={"ID":"27c667fc-f9ca-4305-aa3b-4be2ae723674","Type":"ContainerStarted","Data":"19c9185fcf9b303cab536f1520080dae2cea0d2e980feb4382f42b1cad12ce44"} Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.758918 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xm7lz"] Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.762962 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0740d08e-8e81-4133-9969-7b777cfef0f7","Type":"ContainerStarted","Data":"09d487b0fabea98f300d6db264491e34d0ceb442518366747d8029da337138bc"} Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.763881 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/memcached-0" Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.773916 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05001747-2338-4f97-8f09-68b9541f94e8","Type":"ContainerStarted","Data":"646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b"} Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.774341 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.776850 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=23.919706317 podStartE2EDuration="32.776840447s" podCreationTimestamp="2025-12-08 09:29:33 +0000 UTC" firstStartedPulling="2025-12-08 09:29:49.311307073 +0000 UTC m=+912.880335063" lastFinishedPulling="2025-12-08 09:29:58.168441203 +0000 UTC m=+921.737469193" observedRunningTime="2025-12-08 09:30:05.764145793 +0000 UTC m=+929.333173803" watchObservedRunningTime="2025-12-08 09:30:05.776840447 +0000 UTC m=+929.345868427" Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.790673 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9b3e5a2-0303-435d-9bd7-763b2f802e46","Type":"ContainerStarted","Data":"3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c"} Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.795076 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=25.291141846 podStartE2EDuration="30.79505817s" podCreationTimestamp="2025-12-08 09:29:35 +0000 UTC" firstStartedPulling="2025-12-08 09:29:58.150400715 +0000 UTC m=+921.719428705" lastFinishedPulling="2025-12-08 09:30:03.654317039 +0000 UTC m=+927.223345029" observedRunningTime="2025-12-08 09:30:05.781675868 +0000 UTC m=+929.350703858" watchObservedRunningTime="2025-12-08 09:30:05.79505817 +0000 UTC m=+929.364086160" Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.809228 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2d8d4f46-241c-490e-b219-2600ed0a74c5","Type":"ContainerStarted","Data":"4f271e5b4e0a25d1df43cea90f0b17fd18a84ea480da9919d60c2938478ff00e"} Dec 08 09:30:05 crc kubenswrapper[4662]: I1208 09:30:05.823367 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=22.739806402 podStartE2EDuration="29.823349275s" podCreationTimestamp="2025-12-08 09:29:36 +0000 UTC" firstStartedPulling="2025-12-08 09:29:58.149987864 +0000 UTC m=+921.719015854" lastFinishedPulling="2025-12-08 09:30:05.233530737 +0000 UTC m=+928.802558727" observedRunningTime="2025-12-08 09:30:05.821999959 +0000 UTC m=+929.391027949" watchObservedRunningTime="2025-12-08 09:30:05.823349275 +0000 UTC m=+929.392377265" Dec 08 09:30:06 crc kubenswrapper[4662]: I1208 09:30:06.820716 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tx8rx" event={"ID":"4f875ff2-9f06-470b-89dd-2f6215a7e40c","Type":"ContainerStarted","Data":"790deaad570f61db86898a97d529638636bfbb95a29701e65db6bd9c029f91e5"} Dec 08 09:30:06 crc kubenswrapper[4662]: I1208 09:30:06.821892 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-tx8rx" Dec 08 09:30:06 crc kubenswrapper[4662]: I1208 09:30:06.824318 4662 generic.go:334] "Generic (PLEG): container finished" 
podID="27c667fc-f9ca-4305-aa3b-4be2ae723674" containerID="19c9185fcf9b303cab536f1520080dae2cea0d2e980feb4382f42b1cad12ce44" exitCode=0 Dec 08 09:30:06 crc kubenswrapper[4662]: I1208 09:30:06.824364 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rbsxq" event={"ID":"27c667fc-f9ca-4305-aa3b-4be2ae723674","Type":"ContainerDied","Data":"19c9185fcf9b303cab536f1520080dae2cea0d2e980feb4382f42b1cad12ce44"} Dec 08 09:30:06 crc kubenswrapper[4662]: I1208 09:30:06.828432 4662 generic.go:334] "Generic (PLEG): container finished" podID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerID="b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e" exitCode=0 Dec 08 09:30:06 crc kubenswrapper[4662]: I1208 09:30:06.828780 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm7lz" event={"ID":"83624e69-0eea-46dc-8f70-c5c9e87b9ca8","Type":"ContainerDied","Data":"b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e"} Dec 08 09:30:06 crc kubenswrapper[4662]: I1208 09:30:06.828824 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm7lz" event={"ID":"83624e69-0eea-46dc-8f70-c5c9e87b9ca8","Type":"ContainerStarted","Data":"657d5832cbe6735a4b01817124ed8b4cd87a9f12f6fbb99d52fbe3ef0b5f896f"} Dec 08 09:30:06 crc kubenswrapper[4662]: I1208 09:30:06.843841 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-tx8rx" podStartSLOduration=20.507280093 podStartE2EDuration="26.843817195s" podCreationTimestamp="2025-12-08 09:29:40 +0000 UTC" firstStartedPulling="2025-12-08 09:29:58.82214517 +0000 UTC m=+922.391173160" lastFinishedPulling="2025-12-08 09:30:05.158682272 +0000 UTC m=+928.727710262" observedRunningTime="2025-12-08 09:30:06.837135584 +0000 UTC m=+930.406163594" watchObservedRunningTime="2025-12-08 09:30:06.843817195 +0000 UTC m=+930.412845205" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.205221 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.325162 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f588abef-a98b-4281-bce4-b5871fea382b-secret-volume\") pod \"f588abef-a98b-4281-bce4-b5871fea382b\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.325198 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dplfm\" (UniqueName: \"kubernetes.io/projected/f588abef-a98b-4281-bce4-b5871fea382b-kube-api-access-dplfm\") pod \"f588abef-a98b-4281-bce4-b5871fea382b\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.325295 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f588abef-a98b-4281-bce4-b5871fea382b-config-volume\") pod \"f588abef-a98b-4281-bce4-b5871fea382b\" (UID: \"f588abef-a98b-4281-bce4-b5871fea382b\") " Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.326262 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f588abef-a98b-4281-bce4-b5871fea382b-config-volume" (OuterVolumeSpecName: "config-volume") pod "f588abef-a98b-4281-bce4-b5871fea382b" (UID: "f588abef-a98b-4281-bce4-b5871fea382b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.330708 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f588abef-a98b-4281-bce4-b5871fea382b-kube-api-access-dplfm" (OuterVolumeSpecName: "kube-api-access-dplfm") pod "f588abef-a98b-4281-bce4-b5871fea382b" (UID: "f588abef-a98b-4281-bce4-b5871fea382b"). InnerVolumeSpecName "kube-api-access-dplfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.335924 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f588abef-a98b-4281-bce4-b5871fea382b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f588abef-a98b-4281-bce4-b5871fea382b" (UID: "f588abef-a98b-4281-bce4-b5871fea382b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.426654 4662 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f588abef-a98b-4281-bce4-b5871fea382b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.426944 4662 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f588abef-a98b-4281-bce4-b5871fea382b-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.426955 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dplfm\" (UniqueName: \"kubernetes.io/projected/f588abef-a98b-4281-bce4-b5871fea382b-kube-api-access-dplfm\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.839534 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rbsxq" event={"ID":"27c667fc-f9ca-4305-aa3b-4be2ae723674","Type":"ContainerStarted","Data":"f5da9ece7e701e5b64abf1dc47753278c4227fdc4e1f643db7f3d2c48057ba18"} Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.839578 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-rbsxq" event={"ID":"27c667fc-f9ca-4305-aa3b-4be2ae723674","Type":"ContainerStarted","Data":"690a0ee0e487182cdab702f99827e97c81e7182e371678b4233d494442f20bb0"} Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.839910 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rbsxq" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.839951 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-rbsxq" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.844950 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.847871 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419770-dsq97" event={"ID":"f588abef-a98b-4281-bce4-b5871fea382b","Type":"ContainerDied","Data":"0925b1642a9e8960b1366a452bdcc78eab4d461a7543ad5e51f4950e2c4e2819"} Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.847938 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0925b1642a9e8960b1366a452bdcc78eab4d461a7543ad5e51f4950e2c4e2819" Dec 08 09:30:07 crc kubenswrapper[4662]: I1208 09:30:07.877120 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-rbsxq" podStartSLOduration=22.375584532 podStartE2EDuration="27.877103261s" podCreationTimestamp="2025-12-08 09:29:40 +0000 UTC" firstStartedPulling="2025-12-08 09:29:58.150268951 +0000 UTC m=+921.719296941" lastFinishedPulling="2025-12-08 09:30:03.65178768 +0000 UTC m=+927.220815670" observedRunningTime="2025-12-08 09:30:07.866303779 +0000 UTC m=+931.435331779" watchObservedRunningTime="2025-12-08 09:30:07.877103261 +0000 UTC m=+931.446131251" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.418904 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m7stx"] Dec 08 09:30:09 crc kubenswrapper[4662]: E1208 09:30:09.419228 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f588abef-a98b-4281-bce4-b5871fea382b" containerName="collect-profiles" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.419248 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f588abef-a98b-4281-bce4-b5871fea382b" containerName="collect-profiles" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.420255 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f588abef-a98b-4281-bce4-b5871fea382b" containerName="collect-profiles" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.421509 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.428558 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7stx"] Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.467519 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-catalog-content\") pod \"community-operators-m7stx\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") " pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.467782 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-utilities\") pod \"community-operators-m7stx\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") " pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.467858 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknzx\" (UniqueName: \"kubernetes.io/projected/7f968b2e-40d0-443c-acb6-931c8115e1eb-kube-api-access-wknzx\") pod \"community-operators-m7stx\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") " pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.569112 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-utilities\") pod \"community-operators-m7stx\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") " pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.569449 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wknzx\" (UniqueName: \"kubernetes.io/projected/7f968b2e-40d0-443c-acb6-931c8115e1eb-kube-api-access-wknzx\") pod \"community-operators-m7stx\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") " pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.569537 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-catalog-content\") pod \"community-operators-m7stx\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") " pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.569704 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-utilities\") pod \"community-operators-m7stx\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") " pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.569957 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-catalog-content\") pod \"community-operators-m7stx\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") " pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.590906 4662 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wknzx\" (UniqueName: \"kubernetes.io/projected/7f968b2e-40d0-443c-acb6-931c8115e1eb-kube-api-access-wknzx\") pod \"community-operators-m7stx\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") " pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:09 crc kubenswrapper[4662]: I1208 09:30:09.749527 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7stx" Dec 08 09:30:10 crc kubenswrapper[4662]: I1208 09:30:10.603504 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.139057 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m7stx"] Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.896600 4662 generic.go:334] "Generic (PLEG): container finished" podID="ad5b733a-89ba-4494-afd9-6994f402db46" containerID="8fd09dce72e03bef003ac41ad630758c1e0a33ab74aefc19c0812af4d36e11e4" exitCode=0 Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.896683 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" event={"ID":"ad5b733a-89ba-4494-afd9-6994f402db46","Type":"ContainerDied","Data":"8fd09dce72e03bef003ac41ad630758c1e0a33ab74aefc19c0812af4d36e11e4"} Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.898158 4662 generic.go:334] "Generic (PLEG): container finished" podID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerID="a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e" exitCode=0 Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.898225 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7stx" event={"ID":"7f968b2e-40d0-443c-acb6-931c8115e1eb","Type":"ContainerDied","Data":"a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e"} Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.898245 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7stx" event={"ID":"7f968b2e-40d0-443c-acb6-931c8115e1eb","Type":"ContainerStarted","Data":"6d78ab1bf23af38d44b3c51bddc6def4ad71bd16a8ebc7e82feb4e0622f96c07"} Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.899851 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"2d8d4f46-241c-490e-b219-2600ed0a74c5","Type":"ContainerStarted","Data":"aa0463980ea2d8e88c387ba18e0ee1d48463143ade7c2b213b0fc56b10f3363e"} Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.901182 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0b3100ca-3241-444e-b279-248592e848fe","Type":"ContainerStarted","Data":"a5a070e3a238ea6d3d7dff0597294f9ecb6c180037e8a7d849fa2107c6d10ce8"} Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.903641 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"a965cd5f-6888-4033-9d26-02978e2e0f36","Type":"ContainerStarted","Data":"8613c7de96d2c01e5da4facbe9a7e3dfed5cfe4036798e730a4120d0273deb6c"} Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.909030 4662 generic.go:334] "Generic (PLEG): container finished" podID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" containerID="8408c085b611759042d544c3f52ff9719fed5c8acbd5b73b4aaa22552581142b" exitCode=0 Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.909286 4662 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" event={"ID":"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a","Type":"ContainerDied","Data":"8408c085b611759042d544c3f52ff9719fed5c8acbd5b73b4aaa22552581142b"} Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.935284 4662 generic.go:334] "Generic (PLEG): container finished" podID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerID="631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6" exitCode=0 Dec 08 09:30:11 crc kubenswrapper[4662]: I1208 09:30:11.935326 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm7lz" event={"ID":"83624e69-0eea-46dc-8f70-c5c9e87b9ca8","Type":"ContainerDied","Data":"631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6"} Dec 08 09:30:12 crc kubenswrapper[4662]: I1208 09:30:12.056802 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.406575059 podStartE2EDuration="29.056784918s" podCreationTimestamp="2025-12-08 09:29:43 +0000 UTC" firstStartedPulling="2025-12-08 09:29:59.010187627 +0000 UTC m=+922.579215617" lastFinishedPulling="2025-12-08 09:30:10.660397496 +0000 UTC m=+934.229425476" observedRunningTime="2025-12-08 09:30:12.048222416 +0000 UTC m=+935.617250406" watchObservedRunningTime="2025-12-08 09:30:12.056784918 +0000 UTC m=+935.625812908" Dec 08 09:30:12 crc kubenswrapper[4662]: I1208 09:30:12.080001 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=20.408180192 podStartE2EDuration="32.079982025s" podCreationTimestamp="2025-12-08 09:29:40 +0000 UTC" firstStartedPulling="2025-12-08 09:29:59.224865856 +0000 UTC m=+922.793893846" lastFinishedPulling="2025-12-08 09:30:10.896667689 +0000 UTC m=+934.465695679" observedRunningTime="2025-12-08 09:30:12.078618658 +0000 UTC m=+935.647646648" watchObservedRunningTime="2025-12-08 09:30:12.079982025 +0000 UTC m=+935.649010015" Dec 08 09:30:12 crc kubenswrapper[4662]: I1208 09:30:12.130102 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Dec 08 09:30:12 crc kubenswrapper[4662]: I1208 09:30:12.237122 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Dec 08 09:30:12 crc kubenswrapper[4662]: I1208 09:30:12.942716 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.006981 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.282830 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jjmkp"] Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.317437 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-ndntj"] Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.319373 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.326930 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.357976 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-ndntj"] Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.444199 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzlb\" (UniqueName: \"kubernetes.io/projected/6ab24c97-4d09-4090-8122-529e0d6d3d0b-kube-api-access-8bzlb\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.444289 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.444317 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.444407 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-config\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.545942 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-config\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.546076 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bzlb\" (UniqueName: \"kubernetes.io/projected/6ab24c97-4d09-4090-8122-529e0d6d3d0b-kube-api-access-8bzlb\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.546100 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.546132 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" 
Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.547036 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.547598 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-config\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.550098 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.558832 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-5wz6p"] Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.562605 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.571089 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.571158 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5wz6p"] Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.582406 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bzlb\" (UniqueName: \"kubernetes.io/projected/6ab24c97-4d09-4090-8122-529e0d6d3d0b-kube-api-access-8bzlb\") pod \"dnsmasq-dns-6bc7876d45-ndntj\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") " pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.643193 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.648182 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4760fc65-89b9-4df9-90ca-ab6e968955dd-config\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.648356 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4760fc65-89b9-4df9-90ca-ab6e968955dd-combined-ca-bundle\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.648473 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4760fc65-89b9-4df9-90ca-ab6e968955dd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.648955 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8q2n\" (UniqueName: \"kubernetes.io/projected/4760fc65-89b9-4df9-90ca-ab6e968955dd-kube-api-access-t8q2n\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.649097 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4760fc65-89b9-4df9-90ca-ab6e968955dd-ovn-rundir\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.649249 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4760fc65-89b9-4df9-90ca-ab6e968955dd-ovs-rundir\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.750103 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4760fc65-89b9-4df9-90ca-ab6e968955dd-config\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.750176 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4760fc65-89b9-4df9-90ca-ab6e968955dd-combined-ca-bundle\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.750205 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/4760fc65-89b9-4df9-90ca-ab6e968955dd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.750248 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8q2n\" (UniqueName: \"kubernetes.io/projected/4760fc65-89b9-4df9-90ca-ab6e968955dd-kube-api-access-t8q2n\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.750266 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4760fc65-89b9-4df9-90ca-ab6e968955dd-ovn-rundir\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.750299 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4760fc65-89b9-4df9-90ca-ab6e968955dd-ovs-rundir\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.750825 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4760fc65-89b9-4df9-90ca-ab6e968955dd-ovs-rundir\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.751046 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4760fc65-89b9-4df9-90ca-ab6e968955dd-config\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.751126 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4760fc65-89b9-4df9-90ca-ab6e968955dd-ovn-rundir\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.756121 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4760fc65-89b9-4df9-90ca-ab6e968955dd-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.770036 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4760fc65-89b9-4df9-90ca-ab6e968955dd-combined-ca-bundle\") pod \"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.780503 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8q2n\" (UniqueName: \"kubernetes.io/projected/4760fc65-89b9-4df9-90ca-ab6e968955dd-kube-api-access-t8q2n\") pod 
\"ovn-controller-metrics-5wz6p\" (UID: \"4760fc65-89b9-4df9-90ca-ab6e968955dd\") " pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.782727 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jzhfn"] Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.810967 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-zrgj8"] Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.813371 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.817832 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.834922 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zrgj8"] Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.917932 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-5wz6p" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.953639 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-config\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.953702 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.953894 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxpkp\" (UniqueName: \"kubernetes.io/projected/bd6ad89b-50ff-42c9-91a1-36f23be3568b-kube-api-access-kxpkp\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.953916 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-dns-svc\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.953964 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.989376 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" event={"ID":"ad5b733a-89ba-4494-afd9-6994f402db46","Type":"ContainerStarted","Data":"71b88bb322dd2e7be75b240514708f2fc0df1e50834d5d4752c8946dfdc17eba"} Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.989511 4662 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" podUID="ad5b733a-89ba-4494-afd9-6994f402db46" containerName="dnsmasq-dns" containerID="cri-o://71b88bb322dd2e7be75b240514708f2fc0df1e50834d5d4752c8946dfdc17eba" gracePeriod=10 Dec 08 09:30:13 crc kubenswrapper[4662]: I1208 09:30:13.989573 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.004158 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" podUID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" containerName="dnsmasq-dns" containerID="cri-o://d3f68255021634c4146e2278302765a07e84fc8095578b4145531356ab266927" gracePeriod=10 Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.004217 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" event={"ID":"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a","Type":"ContainerStarted","Data":"d3f68255021634c4146e2278302765a07e84fc8095578b4145531356ab266927"} Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.004267 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.050950 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" podStartSLOduration=-9223371992.803846 podStartE2EDuration="44.050929372s" podCreationTimestamp="2025-12-08 09:29:30 +0000 UTC" firstStartedPulling="2025-12-08 09:29:31.572687412 +0000 UTC m=+895.141715402" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:30:14.050576552 +0000 UTC m=+937.619604542" watchObservedRunningTime="2025-12-08 09:30:14.050929372 +0000 UTC m=+937.619957362" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.058157 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxpkp\" (UniqueName: \"kubernetes.io/projected/bd6ad89b-50ff-42c9-91a1-36f23be3568b-kube-api-access-kxpkp\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.058194 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-dns-svc\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.058217 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.058401 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-config\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.058464 4662 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.060515 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-config\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.060531 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.061093 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.062046 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-dns-svc\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.079531 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" podStartSLOduration=5.27800628 podStartE2EDuration="44.079511924s" podCreationTimestamp="2025-12-08 09:29:30 +0000 UTC" firstStartedPulling="2025-12-08 09:29:31.844595025 +0000 UTC m=+895.413623015" lastFinishedPulling="2025-12-08 09:30:10.646100669 +0000 UTC m=+934.215128659" observedRunningTime="2025-12-08 09:30:14.075327761 +0000 UTC m=+937.644355751" watchObservedRunningTime="2025-12-08 09:30:14.079511924 +0000 UTC m=+937.648539914" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.098063 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxpkp\" (UniqueName: \"kubernetes.io/projected/bd6ad89b-50ff-42c9-91a1-36f23be3568b-kube-api-access-kxpkp\") pod \"dnsmasq-dns-8554648995-zrgj8\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.185101 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.398359 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-ndntj"] Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.563180 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-5wz6p"] Dec 08 09:30:14 crc kubenswrapper[4662]: W1208 09:30:14.567650 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4760fc65_89b9_4df9_90ca_ab6e968955dd.slice/crio-97fb14621a88a8a26260816170aca2cc7f89caaf608a96e5c9be9d4743f30503 WatchSource:0}: Error finding container 97fb14621a88a8a26260816170aca2cc7f89caaf608a96e5c9be9d4743f30503: Status 404 returned error can't find the container with id 97fb14621a88a8a26260816170aca2cc7f89caaf608a96e5c9be9d4743f30503 Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.718617 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zrgj8"] Dec 08 09:30:14 crc kubenswrapper[4662]: W1208 09:30:14.731442 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd6ad89b_50ff_42c9_91a1_36f23be3568b.slice/crio-fd1890c858a8a04b64038f9512a3065a7a99dca248ef8a72527f13323c9b5b21 WatchSource:0}: Error finding container fd1890c858a8a04b64038f9512a3065a7a99dca248ef8a72527f13323c9b5b21: Status 404 returned error can't find the container with id fd1890c858a8a04b64038f9512a3065a7a99dca248ef8a72527f13323c9b5b21 Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.904038 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Dec 08 09:30:14 crc kubenswrapper[4662]: I1208 09:30:14.950848 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.013517 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm7lz" event={"ID":"83624e69-0eea-46dc-8f70-c5c9e87b9ca8","Type":"ContainerStarted","Data":"c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53"} Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.015657 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zrgj8" event={"ID":"bd6ad89b-50ff-42c9-91a1-36f23be3568b","Type":"ContainerStarted","Data":"fd1890c858a8a04b64038f9512a3065a7a99dca248ef8a72527f13323c9b5b21"} Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.018197 4662 generic.go:334] "Generic (PLEG): container finished" podID="ad5b733a-89ba-4494-afd9-6994f402db46" containerID="71b88bb322dd2e7be75b240514708f2fc0df1e50834d5d4752c8946dfdc17eba" exitCode=0 Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.018258 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" event={"ID":"ad5b733a-89ba-4494-afd9-6994f402db46","Type":"ContainerDied","Data":"71b88bb322dd2e7be75b240514708f2fc0df1e50834d5d4752c8946dfdc17eba"} Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.019627 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" event={"ID":"6ab24c97-4d09-4090-8122-529e0d6d3d0b","Type":"ContainerStarted","Data":"5c998b9f72466de30789c0b84a988fe4adab328382973cb6b45e93c4c9342f40"} Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 
09:30:15.021322 4662 generic.go:334] "Generic (PLEG): container finished" podID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerID="e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470" exitCode=0 Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.021365 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7stx" event={"ID":"7f968b2e-40d0-443c-acb6-931c8115e1eb","Type":"ContainerDied","Data":"e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470"} Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.023817 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5wz6p" event={"ID":"4760fc65-89b9-4df9-90ca-ab6e968955dd","Type":"ContainerStarted","Data":"97fb14621a88a8a26260816170aca2cc7f89caaf608a96e5c9be9d4743f30503"} Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.028158 4662 generic.go:334] "Generic (PLEG): container finished" podID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" containerID="d3f68255021634c4146e2278302765a07e84fc8095578b4145531356ab266927" exitCode=0 Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.029019 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" event={"ID":"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a","Type":"ContainerDied","Data":"d3f68255021634c4146e2278302765a07e84fc8095578b4145531356ab266927"} Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.029577 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.072436 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xm7lz" podStartSLOduration=4.157001575 podStartE2EDuration="11.072414939s" podCreationTimestamp="2025-12-08 09:30:04 +0000 UTC" firstStartedPulling="2025-12-08 09:30:06.832093608 +0000 UTC m=+930.401121598" lastFinishedPulling="2025-12-08 09:30:13.747506972 +0000 UTC m=+937.316534962" observedRunningTime="2025-12-08 09:30:15.045378837 +0000 UTC m=+938.614406847" watchObservedRunningTime="2025-12-08 09:30:15.072414939 +0000 UTC m=+938.641442929" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.079735 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.171538 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.171885 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.270811 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.272458 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.276665 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.278031 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.278169 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.283281 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-ttv5t" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.312948 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc55a4a3-d846-465b-914e-225c9ee2bfc5-config\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.313007 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc55a4a3-d846-465b-914e-225c9ee2bfc5-scripts\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.313030 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc55a4a3-d846-465b-914e-225c9ee2bfc5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.313064 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc55a4a3-d846-465b-914e-225c9ee2bfc5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.313088 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc55a4a3-d846-465b-914e-225c9ee2bfc5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.313187 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2fsf\" (UniqueName: \"kubernetes.io/projected/fc55a4a3-d846-465b-914e-225c9ee2bfc5-kube-api-access-n2fsf\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.313214 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fc55a4a3-d846-465b-914e-225c9ee2bfc5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.336187 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Dec 08 09:30:15 crc kubenswrapper[4662]: 
I1208 09:30:15.414605 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc55a4a3-d846-465b-914e-225c9ee2bfc5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.414653 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc55a4a3-d846-465b-914e-225c9ee2bfc5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.414721 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2fsf\" (UniqueName: \"kubernetes.io/projected/fc55a4a3-d846-465b-914e-225c9ee2bfc5-kube-api-access-n2fsf\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.414756 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fc55a4a3-d846-465b-914e-225c9ee2bfc5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.414804 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc55a4a3-d846-465b-914e-225c9ee2bfc5-config\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.414826 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc55a4a3-d846-465b-914e-225c9ee2bfc5-scripts\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.414843 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc55a4a3-d846-465b-914e-225c9ee2bfc5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.418271 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc55a4a3-d846-465b-914e-225c9ee2bfc5-config\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.418799 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/fc55a4a3-d846-465b-914e-225c9ee2bfc5-scripts\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.419939 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/fc55a4a3-d846-465b-914e-225c9ee2bfc5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.421389 4662 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc55a4a3-d846-465b-914e-225c9ee2bfc5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.421480 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc55a4a3-d846-465b-914e-225c9ee2bfc5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.424018 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc55a4a3-d846-465b-914e-225c9ee2bfc5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.436667 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2fsf\" (UniqueName: \"kubernetes.io/projected/fc55a4a3-d846-465b-914e-225c9ee2bfc5-kube-api-access-n2fsf\") pod \"ovn-northd-0\" (UID: \"fc55a4a3-d846-465b-914e-225c9ee2bfc5\") " pod="openstack/ovn-northd-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.452476 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.471782 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.516319 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vmgbk\" (UniqueName: \"kubernetes.io/projected/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-kube-api-access-vmgbk\") pod \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.516637 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-dns-svc\") pod \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.516813 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-config\") pod \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\" (UID: \"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a\") " Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.537917 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-kube-api-access-vmgbk" (OuterVolumeSpecName: "kube-api-access-vmgbk") pod "cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" (UID: "cb1d60dc-cf58-4298-bc3e-71eb70bcd64a"). InnerVolumeSpecName "kube-api-access-vmgbk". 
Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.568179 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-config" (OuterVolumeSpecName: "config") pod "cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" (UID: "cb1d60dc-cf58-4298-bc3e-71eb70bcd64a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.595245 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0"
Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.597855 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" (UID: "cb1d60dc-cf58-4298-bc3e-71eb70bcd64a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.622021 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vmgbk\" (UniqueName: \"kubernetes.io/projected/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-kube-api-access-vmgbk\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.622056 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.622066 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a-config\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:15 crc kubenswrapper[4662]: I1208 09:30:15.929759 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.026515 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-dns-svc\") pod \"ad5b733a-89ba-4494-afd9-6994f402db46\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") "
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.026664 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sj5s2\" (UniqueName: \"kubernetes.io/projected/ad5b733a-89ba-4494-afd9-6994f402db46-kube-api-access-sj5s2\") pod \"ad5b733a-89ba-4494-afd9-6994f402db46\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") "
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.026815 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-config\") pod \"ad5b733a-89ba-4494-afd9-6994f402db46\" (UID: \"ad5b733a-89ba-4494-afd9-6994f402db46\") "
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.052146 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad5b733a-89ba-4494-afd9-6994f402db46-kube-api-access-sj5s2" (OuterVolumeSpecName: "kube-api-access-sj5s2") pod "ad5b733a-89ba-4494-afd9-6994f402db46" (UID: "ad5b733a-89ba-4494-afd9-6994f402db46"). InnerVolumeSpecName "kube-api-access-sj5s2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.060636 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp" event={"ID":"cb1d60dc-cf58-4298-bc3e-71eb70bcd64a","Type":"ContainerDied","Data":"05526583be2314597653ea73b0eb7c79f8fcbc2abbc6a43540a37e480db7ea1d"}
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.060704 4662 scope.go:117] "RemoveContainer" containerID="d3f68255021634c4146e2278302765a07e84fc8095578b4145531356ab266927"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.060843 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-jjmkp"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.069535 4662 generic.go:334] "Generic (PLEG): container finished" podID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerID="6bf4a74d45bccd42cd0815a91949d7e6da93663d9b9185b596b6666ae81c2cdd" exitCode=0
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.069600 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zrgj8" event={"ID":"bd6ad89b-50ff-42c9-91a1-36f23be3568b","Type":"ContainerDied","Data":"6bf4a74d45bccd42cd0815a91949d7e6da93663d9b9185b596b6666ae81c2cdd"}
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.075202 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn" event={"ID":"ad5b733a-89ba-4494-afd9-6994f402db46","Type":"ContainerDied","Data":"3e8d225ed6bb0f37b49beaf86809021c2ba8a30ea54a8e73ea7269faf8bb585c"}
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.075321 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-jzhfn"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.078022 4662 generic.go:334] "Generic (PLEG): container finished" podID="6ab24c97-4d09-4090-8122-529e0d6d3d0b" containerID="b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2" exitCode=0
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.078223 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" event={"ID":"6ab24c97-4d09-4090-8122-529e0d6d3d0b","Type":"ContainerDied","Data":"b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2"}
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.087369 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7stx" event={"ID":"7f968b2e-40d0-443c-acb6-931c8115e1eb","Type":"ContainerStarted","Data":"afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29"}
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.101098 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-5wz6p" event={"ID":"4760fc65-89b9-4df9-90ca-ab6e968955dd","Type":"ContainerStarted","Data":"79967dfff29f79f838cfae6a5a0d6c5ae5d28452c5eb037acb9b7e90bd8a1934"}
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.103772 4662 scope.go:117] "RemoveContainer" containerID="8408c085b611759042d544c3f52ff9719fed5c8acbd5b73b4aaa22552581142b"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.128562 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sj5s2\" (UniqueName: \"kubernetes.io/projected/ad5b733a-89ba-4494-afd9-6994f402db46-kube-api-access-sj5s2\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.130188 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m7stx" podStartSLOduration=3.500679088 podStartE2EDuration="7.130167528s" podCreationTimestamp="2025-12-08 09:30:09 +0000 UTC" firstStartedPulling="2025-12-08 09:30:11.901452765 +0000 UTC m=+935.470480755" lastFinishedPulling="2025-12-08 09:30:15.530941205 +0000 UTC m=+939.099969195" observedRunningTime="2025-12-08 09:30:16.120169237 +0000 UTC m=+939.689197237" watchObservedRunningTime="2025-12-08 09:30:16.130167528 +0000 UTC m=+939.699195518"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.133915 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ad5b733a-89ba-4494-afd9-6994f402db46" (UID: "ad5b733a-89ba-4494-afd9-6994f402db46"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.137714 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-config" (OuterVolumeSpecName: "config") pod "ad5b733a-89ba-4494-afd9-6994f402db46" (UID: "ad5b733a-89ba-4494-afd9-6994f402db46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:30:16 crc kubenswrapper[4662]: W1208 09:30:16.184152 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc55a4a3_d846_465b_914e_225c9ee2bfc5.slice/crio-f4363eabc1f0aeabf1ab748c45abb27c91d0114df5c257a1078016ef48ead8f0 WatchSource:0}: Error finding container f4363eabc1f0aeabf1ab748c45abb27c91d0114df5c257a1078016ef48ead8f0: Status 404 returned error can't find the container with id f4363eabc1f0aeabf1ab748c45abb27c91d0114df5c257a1078016ef48ead8f0
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.184797 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"]
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.190608 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jjmkp"]
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.200922 4662 scope.go:117] "RemoveContainer" containerID="71b88bb322dd2e7be75b240514708f2fc0df1e50834d5d4752c8946dfdc17eba"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.201591 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-jjmkp"]
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.214426 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-5wz6p" podStartSLOduration=3.214406917 podStartE2EDuration="3.214406917s" podCreationTimestamp="2025-12-08 09:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:30:16.201322653 +0000 UTC m=+939.770350643" watchObservedRunningTime="2025-12-08 09:30:16.214406917 +0000 UTC m=+939.783434907"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.230748 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.230776 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad5b733a-89ba-4494-afd9-6994f402db46-config\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.298248 4662 scope.go:117] "RemoveContainer" containerID="8fd09dce72e03bef003ac41ad630758c1e0a33ab74aefc19c0812af4d36e11e4"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.337210 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.466116 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jzhfn"]
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.478780 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-jzhfn"]
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.712011 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad5b733a-89ba-4494-afd9-6994f402db46" path="/var/lib/kubelet/pods/ad5b733a-89ba-4494-afd9-6994f402db46/volumes"
Dec 08 09:30:16 crc kubenswrapper[4662]: I1208 09:30:16.712809 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" path="/var/lib/kubelet/pods/cb1d60dc-cf58-4298-bc3e-71eb70bcd64a/volumes"
Dec 08 09:30:17 crc kubenswrapper[4662]: I1208 09:30:17.110721 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zrgj8" event={"ID":"bd6ad89b-50ff-42c9-91a1-36f23be3568b","Type":"ContainerStarted","Data":"28ecba70f752a5a6e7b3f64fc9bdc43570a089977affc86fb0627755603f1fdb"}
Dec 08 09:30:17 crc kubenswrapper[4662]: I1208 09:30:17.111026 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-zrgj8"
Dec 08 09:30:17 crc kubenswrapper[4662]: I1208 09:30:17.113776 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" event={"ID":"6ab24c97-4d09-4090-8122-529e0d6d3d0b","Type":"ContainerStarted","Data":"c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d"}
Dec 08 09:30:17 crc kubenswrapper[4662]: I1208 09:30:17.114481 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj"
Dec 08 09:30:17 crc kubenswrapper[4662]: I1208 09:30:17.116719 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fc55a4a3-d846-465b-914e-225c9ee2bfc5","Type":"ContainerStarted","Data":"f4363eabc1f0aeabf1ab748c45abb27c91d0114df5c257a1078016ef48ead8f0"}
Dec 08 09:30:17 crc kubenswrapper[4662]: I1208 09:30:17.130177 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-zrgj8" podStartSLOduration=4.130158014 podStartE2EDuration="4.130158014s" podCreationTimestamp="2025-12-08 09:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:30:17.127236345 +0000 UTC m=+940.696264335" watchObservedRunningTime="2025-12-08 09:30:17.130158014 +0000 UTC m=+940.699186024"
Dec 08 09:30:17 crc kubenswrapper[4662]: I1208 09:30:17.165457 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Dec 08 09:30:17 crc kubenswrapper[4662]: I1208 09:30:17.166575 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" podStartSLOduration=4.166558469 podStartE2EDuration="4.166558469s" podCreationTimestamp="2025-12-08 09:30:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:30:17.151243864 +0000 UTC m=+940.720271854" watchObservedRunningTime="2025-12-08 09:30:17.166558469 +0000 UTC m=+940.735586449"
Dec 08 09:30:18 crc kubenswrapper[4662]: I1208 09:30:18.127912 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fc55a4a3-d846-465b-914e-225c9ee2bfc5","Type":"ContainerStarted","Data":"accb30de68f0720a4cf584f2bfae469939641c98932e6d5f9c793fdfcba8d99a"}
Dec 08 09:30:18 crc kubenswrapper[4662]: I1208 09:30:18.128254 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Dec 08 09:30:18 crc kubenswrapper[4662]: I1208 09:30:18.128269 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"fc55a4a3-d846-465b-914e-225c9ee2bfc5","Type":"ContainerStarted","Data":"c7edb26628139cfee08b104b17a0acebfb2e83ef6937e6f96972d5345094dbfb"}
Dec 08 09:30:19 crc kubenswrapper[4662]: E1208 09:30:19.556661 4662 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b3100ca_3241_444e_b279_248592e848fe.slice/crio-a5a070e3a238ea6d3d7dff0597294f9ecb6c180037e8a7d849fa2107c6d10ce8.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b3100ca_3241_444e_b279_248592e848fe.slice/crio-conmon-a5a070e3a238ea6d3d7dff0597294f9ecb6c180037e8a7d849fa2107c6d10ce8.scope\": RecentStats: unable to find data in memory cache]"
Dec 08 09:30:19 crc kubenswrapper[4662]: I1208 09:30:19.750216 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m7stx"
Dec 08 09:30:19 crc kubenswrapper[4662]: I1208 09:30:19.750342 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m7stx"
Dec 08 09:30:19 crc kubenswrapper[4662]: I1208 09:30:19.822955 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m7stx"
Dec 08 09:30:19 crc kubenswrapper[4662]: I1208 09:30:19.844406 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=3.485119363 podStartE2EDuration="4.84438802s" podCreationTimestamp="2025-12-08 09:30:15 +0000 UTC" firstStartedPulling="2025-12-08 09:30:16.201075236 +0000 UTC m=+939.770103236" lastFinishedPulling="2025-12-08 09:30:17.560343903 +0000 UTC m=+941.129371893" observedRunningTime="2025-12-08 09:30:18.152957236 +0000 UTC m=+941.721985226" watchObservedRunningTime="2025-12-08 09:30:19.84438802 +0000 UTC m=+943.413416010"
Dec 08 09:30:20 crc kubenswrapper[4662]: I1208 09:30:20.143913 4662 generic.go:334] "Generic (PLEG): container finished" podID="0b3100ca-3241-444e-b279-248592e848fe" containerID="a5a070e3a238ea6d3d7dff0597294f9ecb6c180037e8a7d849fa2107c6d10ce8" exitCode=0
Dec 08 09:30:20 crc kubenswrapper[4662]: I1208 09:30:20.144006 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0b3100ca-3241-444e-b279-248592e848fe","Type":"ContainerDied","Data":"a5a070e3a238ea6d3d7dff0597294f9ecb6c180037e8a7d849fa2107c6d10ce8"}
Dec 08 09:30:20 crc kubenswrapper[4662]: I1208 09:30:20.234580 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m7stx"
Dec 08 09:30:20 crc kubenswrapper[4662]: I1208 09:30:20.278492 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7stx"]
Dec 08 09:30:21 crc kubenswrapper[4662]: I1208 09:30:21.156128 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"0b3100ca-3241-444e-b279-248592e848fe","Type":"ContainerStarted","Data":"d9f09e52f4b83c469edb1f5899ce7a6c650f1a61bfa57445b4e8d18c7c3402ac"}
Dec 08 09:30:21 crc kubenswrapper[4662]: I1208 09:30:21.187044 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371987.667747 podStartE2EDuration="49.187028957s" podCreationTimestamp="2025-12-08 09:29:32 +0000 UTC" firstStartedPulling="2025-12-08 09:29:40.0389622 +0000 UTC m=+903.607990190" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:30:21.185378793 +0000 UTC m=+944.754406803" watchObservedRunningTime="2025-12-08 09:30:21.187028957 +0000 UTC m=+944.756056947"
Dec 08 09:30:22 crc kubenswrapper[4662]: I1208 09:30:22.163651 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m7stx" podUID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerName="registry-server" containerID="cri-o://afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29" gracePeriod=2
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.132450 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7stx"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.171848 4662 generic.go:334] "Generic (PLEG): container finished" podID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerID="afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29" exitCode=0
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.171888 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7stx" event={"ID":"7f968b2e-40d0-443c-acb6-931c8115e1eb","Type":"ContainerDied","Data":"afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29"}
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.171913 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m7stx" event={"ID":"7f968b2e-40d0-443c-acb6-931c8115e1eb","Type":"ContainerDied","Data":"6d78ab1bf23af38d44b3c51bddc6def4ad71bd16a8ebc7e82feb4e0622f96c07"}
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.171928 4662 scope.go:117] "RemoveContainer" containerID="afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.172028 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m7stx"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.193838 4662 scope.go:117] "RemoveContainer" containerID="e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.223096 4662 scope.go:117] "RemoveContainer" containerID="a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.250634 4662 scope.go:117] "RemoveContainer" containerID="afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29"
Dec 08 09:30:23 crc kubenswrapper[4662]: E1208 09:30:23.251794 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29\": container with ID starting with afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29 not found: ID does not exist" containerID="afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.251970 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29"} err="failed to get container status \"afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29\": rpc error: code = NotFound desc = could not find container \"afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29\": container with ID starting with afb12e05af18c68829b3f34f64da7c5e39f13d3e6dc2169ddba225af3c557d29 not found: ID does not exist"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.252078 4662 scope.go:117] "RemoveContainer" containerID="e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470"
Dec 08 09:30:23 crc kubenswrapper[4662]: E1208 09:30:23.252690 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470\": container with ID starting with e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470 not found: ID does not exist" containerID="e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.252860 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470"} err="failed to get container status \"e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470\": rpc error: code = NotFound desc = could not find container \"e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470\": container with ID starting with e7a32116ab6a28133659d3010c64211b1d1f64b6650fb3db7b0ca9c86a202470 not found: ID does not exist"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.252888 4662 scope.go:117] "RemoveContainer" containerID="a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e"
Dec 08 09:30:23 crc kubenswrapper[4662]: E1208 09:30:23.253274 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e\": container with ID starting with a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e not found: ID does not exist" containerID="a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.253306 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e"} err="failed to get container status \"a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e\": rpc error: code = NotFound desc = could not find container \"a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e\": container with ID starting with a261e53fab66b6d325f50cdfe0f4e5a252990c8027b24799ec871e5c386df22e not found: ID does not exist"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.254938 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-catalog-content\") pod \"7f968b2e-40d0-443c-acb6-931c8115e1eb\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") "
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.254983 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wknzx\" (UniqueName: \"kubernetes.io/projected/7f968b2e-40d0-443c-acb6-931c8115e1eb-kube-api-access-wknzx\") pod \"7f968b2e-40d0-443c-acb6-931c8115e1eb\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") "
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.255143 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-utilities\") pod \"7f968b2e-40d0-443c-acb6-931c8115e1eb\" (UID: \"7f968b2e-40d0-443c-acb6-931c8115e1eb\") "
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.256025 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-utilities" (OuterVolumeSpecName: "utilities") pod "7f968b2e-40d0-443c-acb6-931c8115e1eb" (UID: "7f968b2e-40d0-443c-acb6-931c8115e1eb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.256470 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.261287 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f968b2e-40d0-443c-acb6-931c8115e1eb-kube-api-access-wknzx" (OuterVolumeSpecName: "kube-api-access-wknzx") pod "7f968b2e-40d0-443c-acb6-931c8115e1eb" (UID: "7f968b2e-40d0-443c-acb6-931c8115e1eb"). InnerVolumeSpecName "kube-api-access-wknzx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.321329 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f968b2e-40d0-443c-acb6-931c8115e1eb" (UID: "7f968b2e-40d0-443c-acb6-931c8115e1eb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.358281 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f968b2e-40d0-443c-acb6-931c8115e1eb-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.358316 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wknzx\" (UniqueName: \"kubernetes.io/projected/7f968b2e-40d0-443c-acb6-931c8115e1eb-kube-api-access-wknzx\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.507239 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m7stx"]
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.516544 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m7stx"]
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.646009 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.946821 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Dec 08 09:30:23 crc kubenswrapper[4662]: I1208 09:30:23.946883 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.186858 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-zrgj8"
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.236008 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-ndntj"]
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.236234 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" podUID="6ab24c97-4d09-4090-8122-529e0d6d3d0b" containerName="dnsmasq-dns" containerID="cri-o://c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d" gracePeriod=10
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.417940 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xm7lz"
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.417979 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xm7lz"
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.490042 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xm7lz"
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.707612 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f968b2e-40d0-443c-acb6-931c8115e1eb" path="/var/lib/kubelet/pods/7f968b2e-40d0-443c-acb6-931c8115e1eb/volumes"
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.715208 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj"
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.811887 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-ovsdbserver-sb\") pod \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") "
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.811941 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-config\") pod \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") "
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.811962 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8bzlb\" (UniqueName: \"kubernetes.io/projected/6ab24c97-4d09-4090-8122-529e0d6d3d0b-kube-api-access-8bzlb\") pod \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") "
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.812047 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-dns-svc\") pod \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\" (UID: \"6ab24c97-4d09-4090-8122-529e0d6d3d0b\") "
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.822183 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ab24c97-4d09-4090-8122-529e0d6d3d0b-kube-api-access-8bzlb" (OuterVolumeSpecName: "kube-api-access-8bzlb") pod "6ab24c97-4d09-4090-8122-529e0d6d3d0b" (UID: "6ab24c97-4d09-4090-8122-529e0d6d3d0b"). InnerVolumeSpecName "kube-api-access-8bzlb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.866329 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6ab24c97-4d09-4090-8122-529e0d6d3d0b" (UID: "6ab24c97-4d09-4090-8122-529e0d6d3d0b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.871589 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6ab24c97-4d09-4090-8122-529e0d6d3d0b" (UID: "6ab24c97-4d09-4090-8122-529e0d6d3d0b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.872523 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-config" (OuterVolumeSpecName: "config") pod "6ab24c97-4d09-4090-8122-529e0d6d3d0b" (UID: "6ab24c97-4d09-4090-8122-529e0d6d3d0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.913213 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.913241 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.913250 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6ab24c97-4d09-4090-8122-529e0d6d3d0b-config\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:24 crc kubenswrapper[4662]: I1208 09:30:24.913259 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8bzlb\" (UniqueName: \"kubernetes.io/projected/6ab24c97-4d09-4090-8122-529e0d6d3d0b-kube-api-access-8bzlb\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.187565 4662 generic.go:334] "Generic (PLEG): container finished" podID="6ab24c97-4d09-4090-8122-529e0d6d3d0b" containerID="c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d" exitCode=0
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.188577 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj"
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.190829 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" event={"ID":"6ab24c97-4d09-4090-8122-529e0d6d3d0b","Type":"ContainerDied","Data":"c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d"}
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.190962 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-ndntj" event={"ID":"6ab24c97-4d09-4090-8122-529e0d6d3d0b","Type":"ContainerDied","Data":"5c998b9f72466de30789c0b84a988fe4adab328382973cb6b45e93c4c9342f40"}
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.191038 4662 scope.go:117] "RemoveContainer" containerID="c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d"
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.211825 4662 scope.go:117] "RemoveContainer" containerID="b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2"
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.227271 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-ndntj"]
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.234110 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-ndntj"]
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.250819 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xm7lz"
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.252483 4662 scope.go:117] "RemoveContainer" containerID="c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d"
Dec 08 09:30:25 crc kubenswrapper[4662]: E1208 09:30:25.252854 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d\": container with ID starting with c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d not found: ID does not exist" containerID="c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d"
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.252881 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d"} err="failed to get container status \"c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d\": rpc error: code = NotFound desc = could not find container \"c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d\": container with ID starting with c4e81cf5fc93074b9f5671ec52c84db43ac87e7e2956d8714465e7bc7208e54d not found: ID does not exist"
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.252901 4662 scope.go:117] "RemoveContainer" containerID="b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2"
Dec 08 09:30:25 crc kubenswrapper[4662]: E1208 09:30:25.253149 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2\": container with ID starting with b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2 not found: ID does not exist" containerID="b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2"
Dec 08 09:30:25 crc kubenswrapper[4662]: I1208 09:30:25.253177 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2"} err="failed to get container status \"b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2\": rpc error: code = NotFound desc = could not find container \"b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2\": container with ID starting with b944725616988b808a96bd6bc352730f83f170d05d25ab596d7b018c1ae48bc2 not found: ID does not exist"
Dec 08 09:30:26 crc kubenswrapper[4662]: I1208 09:30:26.205803 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Dec 08 09:30:26 crc kubenswrapper[4662]: I1208 09:30:26.439537 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Dec 08 09:30:26 crc kubenswrapper[4662]: I1208 09:30:26.457524 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xm7lz"]
Dec 08 09:30:26 crc kubenswrapper[4662]: I1208 09:30:26.710407 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ab24c97-4d09-4090-8122-529e0d6d3d0b" path="/var/lib/kubelet/pods/6ab24c97-4d09-4090-8122-529e0d6d3d0b/volumes"
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.205966 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xm7lz" podUID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerName="registry-server" containerID="cri-o://c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53" gracePeriod=2
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.636675 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xm7lz"
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.764158 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wz86\" (UniqueName: \"kubernetes.io/projected/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-kube-api-access-5wz86\") pod \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") "
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.764283 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-utilities\") pod \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") "
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.764338 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-catalog-content\") pod \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\" (UID: \"83624e69-0eea-46dc-8f70-c5c9e87b9ca8\") "
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.769536 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-utilities" (OuterVolumeSpecName: "utilities") pod "83624e69-0eea-46dc-8f70-c5c9e87b9ca8" (UID: "83624e69-0eea-46dc-8f70-c5c9e87b9ca8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.770632 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-kube-api-access-5wz86" (OuterVolumeSpecName: "kube-api-access-5wz86") pod "83624e69-0eea-46dc-8f70-c5c9e87b9ca8" (UID: "83624e69-0eea-46dc-8f70-c5c9e87b9ca8"). InnerVolumeSpecName "kube-api-access-5wz86". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.812579 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "83624e69-0eea-46dc-8f70-c5c9e87b9ca8" (UID: "83624e69-0eea-46dc-8f70-c5c9e87b9ca8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.865966 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5wz86\" (UniqueName: \"kubernetes.io/projected/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-kube-api-access-5wz86\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.866003 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:27 crc kubenswrapper[4662]: I1208 09:30:27.866014 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/83624e69-0eea-46dc-8f70-c5c9e87b9ca8-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.216670 4662 generic.go:334] "Generic (PLEG): container finished" podID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerID="c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53" exitCode=0
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.216716 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm7lz" event={"ID":"83624e69-0eea-46dc-8f70-c5c9e87b9ca8","Type":"ContainerDied","Data":"c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53"}
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.216763 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xm7lz" event={"ID":"83624e69-0eea-46dc-8f70-c5c9e87b9ca8","Type":"ContainerDied","Data":"657d5832cbe6735a4b01817124ed8b4cd87a9f12f6fbb99d52fbe3ef0b5f896f"}
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.216787 4662 scope.go:117] "RemoveContainer" containerID="c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53"
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.216920 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xm7lz"
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.245248 4662 scope.go:117] "RemoveContainer" containerID="631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6"
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.259615 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xm7lz"]
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.266459 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xm7lz"]
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.278302 4662 scope.go:117] "RemoveContainer" containerID="b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e"
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.309999 4662 scope.go:117] "RemoveContainer" containerID="c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53"
Dec 08 09:30:28 crc kubenswrapper[4662]: E1208 09:30:28.310760 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53\": container with ID starting with c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53 not found: ID does not exist" containerID="c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53"
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.310796 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53"} err="failed to get container status \"c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53\": rpc error: code = NotFound desc = could not find container \"c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53\": container with ID starting with c2fad0920dd0755c1225c23853f917a485500b5d49cc96cc05e3d6af3f3aef53 not found: ID does not exist"
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.310817 4662 scope.go:117] "RemoveContainer" containerID="631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6"
Dec 08 09:30:28 crc kubenswrapper[4662]: E1208 09:30:28.311076 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6\": container with ID starting with 631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6 not found: ID does not exist" containerID="631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6"
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.311217 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6"} err="failed to get container status \"631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6\": rpc error: code = NotFound desc = could not find container \"631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6\": container with ID starting with 631de6a6923a9e3ae789028900a3b15dac4bbe64b23e0b426b5e2c9fc5e9cbf6 not found: ID does not exist"
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.311251 4662 scope.go:117] "RemoveContainer" containerID="b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e"
Dec 08 09:30:28 crc kubenswrapper[4662]: E1208 09:30:28.311527 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e\": container with ID starting with b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e not found: ID does not exist" containerID="b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e"
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.311555 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e"} err="failed to get container status \"b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e\": rpc error: code = NotFound desc = could not find container \"b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e\": container with ID starting with b48bdbcc7d78a4d260680062c03094878b9c65574e555fd250c3611d6666ea4e not found: ID does not exist"
Dec 08 09:30:28 crc kubenswrapper[4662]: I1208 09:30:28.711152 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" path="/var/lib/kubelet/pods/83624e69-0eea-46dc-8f70-c5c9e87b9ca8/volumes"
Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.659653 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.832635 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-9489-account-create-update-pswtb"]
Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.832946 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab24c97-4d09-4090-8122-529e0d6d3d0b" containerName="init"
Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.832958 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab24c97-4d09-4090-8122-529e0d6d3d0b" containerName="init"
Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.832971 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ab24c97-4d09-4090-8122-529e0d6d3d0b" containerName="dnsmasq-dns"
Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.832978 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ab24c97-4d09-4090-8122-529e0d6d3d0b" containerName="dnsmasq-dns"
Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.832987 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerName="extract-content"
Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.832994 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerName="extract-content"
Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.833006 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerName="extract-utilities"
Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833013 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerName="extract-utilities"
Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.833025 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerName="extract-utilities"
Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833031 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerName="extract-utilities"
Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.833047 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerName="registry-server"
4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerName="registry-server" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833053 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerName="registry-server" Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.833070 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5b733a-89ba-4494-afd9-6994f402db46" containerName="dnsmasq-dns" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833077 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5b733a-89ba-4494-afd9-6994f402db46" containerName="dnsmasq-dns" Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.833093 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" containerName="init" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833101 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" containerName="init" Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.833110 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad5b733a-89ba-4494-afd9-6994f402db46" containerName="init" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833117 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad5b733a-89ba-4494-afd9-6994f402db46" containerName="init" Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.833130 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" containerName="dnsmasq-dns" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833136 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" containerName="dnsmasq-dns" Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.833144 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerName="extract-content" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833150 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerName="extract-content" Dec 08 09:30:30 crc kubenswrapper[4662]: E1208 09:30:30.833158 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerName="registry-server" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833163 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerName="registry-server" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833318 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f968b2e-40d0-443c-acb6-931c8115e1eb" containerName="registry-server" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833334 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ab24c97-4d09-4090-8122-529e0d6d3d0b" containerName="dnsmasq-dns" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833395 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb1d60dc-cf58-4298-bc3e-71eb70bcd64a" containerName="dnsmasq-dns" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833406 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad5b733a-89ba-4494-afd9-6994f402db46" containerName="dnsmasq-dns" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833414 4662 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="83624e69-0eea-46dc-8f70-c5c9e87b9ca8" containerName="registry-server" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.833932 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9489-account-create-update-pswtb" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.841695 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.862813 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9489-account-create-update-pswtb"] Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.888448 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-gtrsb"] Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.889377 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gtrsb" Dec 08 09:30:30 crc kubenswrapper[4662]: I1208 09:30:30.912841 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gtrsb"] Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.029012 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-operator-scripts\") pod \"glance-9489-account-create-update-pswtb\" (UID: \"54617dbd-843e-4d2e-bae8-f2435e3d7ad3\") " pod="openstack/glance-9489-account-create-update-pswtb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.029071 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npvhd\" (UniqueName: \"kubernetes.io/projected/6447eae0-9785-4301-9b23-ea37ecd5317b-kube-api-access-npvhd\") pod \"glance-db-create-gtrsb\" (UID: \"6447eae0-9785-4301-9b23-ea37ecd5317b\") " pod="openstack/glance-db-create-gtrsb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.029290 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zmcl\" (UniqueName: \"kubernetes.io/projected/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-kube-api-access-5zmcl\") pod \"glance-9489-account-create-update-pswtb\" (UID: \"54617dbd-843e-4d2e-bae8-f2435e3d7ad3\") " pod="openstack/glance-9489-account-create-update-pswtb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.029419 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6447eae0-9785-4301-9b23-ea37ecd5317b-operator-scripts\") pod \"glance-db-create-gtrsb\" (UID: \"6447eae0-9785-4301-9b23-ea37ecd5317b\") " pod="openstack/glance-db-create-gtrsb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.131156 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zmcl\" (UniqueName: \"kubernetes.io/projected/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-kube-api-access-5zmcl\") pod \"glance-9489-account-create-update-pswtb\" (UID: \"54617dbd-843e-4d2e-bae8-f2435e3d7ad3\") " pod="openstack/glance-9489-account-create-update-pswtb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.131222 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6447eae0-9785-4301-9b23-ea37ecd5317b-operator-scripts\") pod \"glance-db-create-gtrsb\" (UID: 
\"6447eae0-9785-4301-9b23-ea37ecd5317b\") " pod="openstack/glance-db-create-gtrsb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.131277 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-operator-scripts\") pod \"glance-9489-account-create-update-pswtb\" (UID: \"54617dbd-843e-4d2e-bae8-f2435e3d7ad3\") " pod="openstack/glance-9489-account-create-update-pswtb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.131302 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npvhd\" (UniqueName: \"kubernetes.io/projected/6447eae0-9785-4301-9b23-ea37ecd5317b-kube-api-access-npvhd\") pod \"glance-db-create-gtrsb\" (UID: \"6447eae0-9785-4301-9b23-ea37ecd5317b\") " pod="openstack/glance-db-create-gtrsb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.132055 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6447eae0-9785-4301-9b23-ea37ecd5317b-operator-scripts\") pod \"glance-db-create-gtrsb\" (UID: \"6447eae0-9785-4301-9b23-ea37ecd5317b\") " pod="openstack/glance-db-create-gtrsb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.132088 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-operator-scripts\") pod \"glance-9489-account-create-update-pswtb\" (UID: \"54617dbd-843e-4d2e-bae8-f2435e3d7ad3\") " pod="openstack/glance-9489-account-create-update-pswtb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.149500 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npvhd\" (UniqueName: \"kubernetes.io/projected/6447eae0-9785-4301-9b23-ea37ecd5317b-kube-api-access-npvhd\") pod \"glance-db-create-gtrsb\" (UID: \"6447eae0-9785-4301-9b23-ea37ecd5317b\") " pod="openstack/glance-db-create-gtrsb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.150888 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zmcl\" (UniqueName: \"kubernetes.io/projected/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-kube-api-access-5zmcl\") pod \"glance-9489-account-create-update-pswtb\" (UID: \"54617dbd-843e-4d2e-bae8-f2435e3d7ad3\") " pod="openstack/glance-9489-account-create-update-pswtb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.163242 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9489-account-create-update-pswtb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.216528 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-gtrsb" Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.669321 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-9489-account-create-update-pswtb"] Dec 08 09:30:31 crc kubenswrapper[4662]: W1208 09:30:31.675378 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod54617dbd_843e_4d2e_bae8_f2435e3d7ad3.slice/crio-b48c489c44fdd48dac923a6ce070bac2bca4b064c88379a9fb0ac9626e36be7b WatchSource:0}: Error finding container b48c489c44fdd48dac923a6ce070bac2bca4b064c88379a9fb0ac9626e36be7b: Status 404 returned error can't find the container with id b48c489c44fdd48dac923a6ce070bac2bca4b064c88379a9fb0ac9626e36be7b Dec 08 09:30:31 crc kubenswrapper[4662]: I1208 09:30:31.752334 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-gtrsb"] Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.247956 4662 generic.go:334] "Generic (PLEG): container finished" podID="54617dbd-843e-4d2e-bae8-f2435e3d7ad3" containerID="a1f2ac266490b81ee4e7884bd29c2a4bf03a8dea8a4f37e94f63c90fc662992d" exitCode=0 Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.248027 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9489-account-create-update-pswtb" event={"ID":"54617dbd-843e-4d2e-bae8-f2435e3d7ad3","Type":"ContainerDied","Data":"a1f2ac266490b81ee4e7884bd29c2a4bf03a8dea8a4f37e94f63c90fc662992d"} Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.248053 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9489-account-create-update-pswtb" event={"ID":"54617dbd-843e-4d2e-bae8-f2435e3d7ad3","Type":"ContainerStarted","Data":"b48c489c44fdd48dac923a6ce070bac2bca4b064c88379a9fb0ac9626e36be7b"} Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.249127 4662 generic.go:334] "Generic (PLEG): container finished" podID="a9f9be7d-4423-489a-a794-e022a83c9e51" containerID="b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881" exitCode=0 Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.249259 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9f9be7d-4423-489a-a794-e022a83c9e51","Type":"ContainerDied","Data":"b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881"} Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.251863 4662 generic.go:334] "Generic (PLEG): container finished" podID="6447eae0-9785-4301-9b23-ea37ecd5317b" containerID="325e7cc3c5b334b9c9056447d1db7b418f2abf0d8b5f6eea86e564b8f8cdf2e4" exitCode=0 Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.251903 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gtrsb" event={"ID":"6447eae0-9785-4301-9b23-ea37ecd5317b","Type":"ContainerDied","Data":"325e7cc3c5b334b9c9056447d1db7b418f2abf0d8b5f6eea86e564b8f8cdf2e4"} Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.251956 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gtrsb" event={"ID":"6447eae0-9785-4301-9b23-ea37ecd5317b","Type":"ContainerStarted","Data":"bd5ee3f7884439749036eafe9cad15f17f462c43bd0353633d106190989748b7"} Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.611638 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.611972 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.612012 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.612660 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d04de701f63b5c2e4111b66668ec4560be524ad9596aef41adf5fe2ab05b3e40"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 09:30:32 crc kubenswrapper[4662]: I1208 09:30:32.612714 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" containerID="cri-o://d04de701f63b5c2e4111b66668ec4560be524ad9596aef41adf5fe2ab05b3e40" gracePeriod=600 Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.261330 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9f9be7d-4423-489a-a794-e022a83c9e51","Type":"ContainerStarted","Data":"3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654"} Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.261811 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.265796 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerID="d04de701f63b5c2e4111b66668ec4560be524ad9596aef41adf5fe2ab05b3e40" exitCode=0 Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.265949 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"d04de701f63b5c2e4111b66668ec4560be524ad9596aef41adf5fe2ab05b3e40"} Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.265975 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"1a38cb2745e5efa64e1f916b4601a82b9010af6b77998742afd727eba45f786c"} Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.265993 4662 scope.go:117] "RemoveContainer" containerID="d950f79d0061a93dd2f9e3d3caab4b8f10f8ead0de736eba822a73ae528aea9e" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.291956 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.441844437 podStartE2EDuration="1m2.291936268s" podCreationTimestamp="2025-12-08 09:29:31 +0000 UTC" firstStartedPulling="2025-12-08 09:29:34.289582003 +0000 UTC m=+897.858609993" lastFinishedPulling="2025-12-08 09:29:58.139673834 +0000 
UTC m=+921.708701824" observedRunningTime="2025-12-08 09:30:33.284189688 +0000 UTC m=+956.853217698" watchObservedRunningTime="2025-12-08 09:30:33.291936268 +0000 UTC m=+956.860964258" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.674017 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-9489-account-create-update-pswtb" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.680058 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gtrsb" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.774941 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zmcl\" (UniqueName: \"kubernetes.io/projected/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-kube-api-access-5zmcl\") pod \"54617dbd-843e-4d2e-bae8-f2435e3d7ad3\" (UID: \"54617dbd-843e-4d2e-bae8-f2435e3d7ad3\") " Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.774992 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-operator-scripts\") pod \"54617dbd-843e-4d2e-bae8-f2435e3d7ad3\" (UID: \"54617dbd-843e-4d2e-bae8-f2435e3d7ad3\") " Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.775053 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6447eae0-9785-4301-9b23-ea37ecd5317b-operator-scripts\") pod \"6447eae0-9785-4301-9b23-ea37ecd5317b\" (UID: \"6447eae0-9785-4301-9b23-ea37ecd5317b\") " Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.775088 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npvhd\" (UniqueName: \"kubernetes.io/projected/6447eae0-9785-4301-9b23-ea37ecd5317b-kube-api-access-npvhd\") pod \"6447eae0-9785-4301-9b23-ea37ecd5317b\" (UID: \"6447eae0-9785-4301-9b23-ea37ecd5317b\") " Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.775406 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54617dbd-843e-4d2e-bae8-f2435e3d7ad3" (UID: "54617dbd-843e-4d2e-bae8-f2435e3d7ad3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.775776 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.776665 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6447eae0-9785-4301-9b23-ea37ecd5317b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6447eae0-9785-4301-9b23-ea37ecd5317b" (UID: "6447eae0-9785-4301-9b23-ea37ecd5317b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.780618 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-kube-api-access-5zmcl" (OuterVolumeSpecName: "kube-api-access-5zmcl") pod "54617dbd-843e-4d2e-bae8-f2435e3d7ad3" (UID: "54617dbd-843e-4d2e-bae8-f2435e3d7ad3"). 
InnerVolumeSpecName "kube-api-access-5zmcl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.796885 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6447eae0-9785-4301-9b23-ea37ecd5317b-kube-api-access-npvhd" (OuterVolumeSpecName: "kube-api-access-npvhd") pod "6447eae0-9785-4301-9b23-ea37ecd5317b" (UID: "6447eae0-9785-4301-9b23-ea37ecd5317b"). InnerVolumeSpecName "kube-api-access-npvhd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.877472 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zmcl\" (UniqueName: \"kubernetes.io/projected/54617dbd-843e-4d2e-bae8-f2435e3d7ad3-kube-api-access-5zmcl\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.877519 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6447eae0-9785-4301-9b23-ea37ecd5317b-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:33 crc kubenswrapper[4662]: I1208 09:30:33.877530 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npvhd\" (UniqueName: \"kubernetes.io/projected/6447eae0-9785-4301-9b23-ea37ecd5317b-kube-api-access-npvhd\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:34 crc kubenswrapper[4662]: I1208 09:30:34.278259 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-gtrsb" Dec 08 09:30:34 crc kubenswrapper[4662]: I1208 09:30:34.278259 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-gtrsb" event={"ID":"6447eae0-9785-4301-9b23-ea37ecd5317b","Type":"ContainerDied","Data":"bd5ee3f7884439749036eafe9cad15f17f462c43bd0353633d106190989748b7"} Dec 08 09:30:34 crc kubenswrapper[4662]: I1208 09:30:34.278826 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd5ee3f7884439749036eafe9cad15f17f462c43bd0353633d106190989748b7" Dec 08 09:30:34 crc kubenswrapper[4662]: I1208 09:30:34.279943 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-9489-account-create-update-pswtb" Dec 08 09:30:34 crc kubenswrapper[4662]: I1208 09:30:34.279935 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-9489-account-create-update-pswtb" event={"ID":"54617dbd-843e-4d2e-bae8-f2435e3d7ad3","Type":"ContainerDied","Data":"b48c489c44fdd48dac923a6ce070bac2bca4b064c88379a9fb0ac9626e36be7b"} Dec 08 09:30:34 crc kubenswrapper[4662]: I1208 09:30:34.279980 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b48c489c44fdd48dac923a6ce070bac2bca4b064c88379a9fb0ac9626e36be7b" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.142409 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-vh2cr"] Dec 08 09:30:35 crc kubenswrapper[4662]: E1208 09:30:35.142987 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54617dbd-843e-4d2e-bae8-f2435e3d7ad3" containerName="mariadb-account-create-update" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.143009 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="54617dbd-843e-4d2e-bae8-f2435e3d7ad3" containerName="mariadb-account-create-update" Dec 08 09:30:35 crc kubenswrapper[4662]: E1208 09:30:35.143027 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6447eae0-9785-4301-9b23-ea37ecd5317b" containerName="mariadb-database-create" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.143035 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="6447eae0-9785-4301-9b23-ea37ecd5317b" containerName="mariadb-database-create" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.143209 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="6447eae0-9785-4301-9b23-ea37ecd5317b" containerName="mariadb-database-create" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.143240 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="54617dbd-843e-4d2e-bae8-f2435e3d7ad3" containerName="mariadb-account-create-update" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.143717 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-vh2cr" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.161593 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vh2cr"] Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.198608 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c05fb17-8432-4eaa-a64c-a1255b17bdce-operator-scripts\") pod \"keystone-db-create-vh2cr\" (UID: \"4c05fb17-8432-4eaa-a64c-a1255b17bdce\") " pod="openstack/keystone-db-create-vh2cr" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.198695 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntxcr\" (UniqueName: \"kubernetes.io/projected/4c05fb17-8432-4eaa-a64c-a1255b17bdce-kube-api-access-ntxcr\") pod \"keystone-db-create-vh2cr\" (UID: \"4c05fb17-8432-4eaa-a64c-a1255b17bdce\") " pod="openstack/keystone-db-create-vh2cr" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.300251 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c05fb17-8432-4eaa-a64c-a1255b17bdce-operator-scripts\") pod \"keystone-db-create-vh2cr\" (UID: \"4c05fb17-8432-4eaa-a64c-a1255b17bdce\") " pod="openstack/keystone-db-create-vh2cr" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.300329 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntxcr\" (UniqueName: \"kubernetes.io/projected/4c05fb17-8432-4eaa-a64c-a1255b17bdce-kube-api-access-ntxcr\") pod \"keystone-db-create-vh2cr\" (UID: \"4c05fb17-8432-4eaa-a64c-a1255b17bdce\") " pod="openstack/keystone-db-create-vh2cr" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.301337 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c05fb17-8432-4eaa-a64c-a1255b17bdce-operator-scripts\") pod \"keystone-db-create-vh2cr\" (UID: \"4c05fb17-8432-4eaa-a64c-a1255b17bdce\") " pod="openstack/keystone-db-create-vh2cr" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.327881 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-c54f-account-create-update-zz4hp"] Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.334110 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c54f-account-create-update-zz4hp" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.340509 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.347222 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c54f-account-create-update-zz4hp"] Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.355661 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntxcr\" (UniqueName: \"kubernetes.io/projected/4c05fb17-8432-4eaa-a64c-a1255b17bdce-kube-api-access-ntxcr\") pod \"keystone-db-create-vh2cr\" (UID: \"4c05fb17-8432-4eaa-a64c-a1255b17bdce\") " pod="openstack/keystone-db-create-vh2cr" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.405600 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxwcs\" (UniqueName: \"kubernetes.io/projected/355ee0bb-eab4-4a74-b65d-591809af525a-kube-api-access-jxwcs\") pod \"keystone-c54f-account-create-update-zz4hp\" (UID: \"355ee0bb-eab4-4a74-b65d-591809af525a\") " pod="openstack/keystone-c54f-account-create-update-zz4hp" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.405690 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355ee0bb-eab4-4a74-b65d-591809af525a-operator-scripts\") pod \"keystone-c54f-account-create-update-zz4hp\" (UID: \"355ee0bb-eab4-4a74-b65d-591809af525a\") " pod="openstack/keystone-c54f-account-create-update-zz4hp" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.464451 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vh2cr" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.507355 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355ee0bb-eab4-4a74-b65d-591809af525a-operator-scripts\") pod \"keystone-c54f-account-create-update-zz4hp\" (UID: \"355ee0bb-eab4-4a74-b65d-591809af525a\") " pod="openstack/keystone-c54f-account-create-update-zz4hp" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.507814 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxwcs\" (UniqueName: \"kubernetes.io/projected/355ee0bb-eab4-4a74-b65d-591809af525a-kube-api-access-jxwcs\") pod \"keystone-c54f-account-create-update-zz4hp\" (UID: \"355ee0bb-eab4-4a74-b65d-591809af525a\") " pod="openstack/keystone-c54f-account-create-update-zz4hp" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.509082 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355ee0bb-eab4-4a74-b65d-591809af525a-operator-scripts\") pod \"keystone-c54f-account-create-update-zz4hp\" (UID: \"355ee0bb-eab4-4a74-b65d-591809af525a\") " pod="openstack/keystone-c54f-account-create-update-zz4hp" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.561487 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-k7kv5"] Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.563108 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-k7kv5" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.574458 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxwcs\" (UniqueName: \"kubernetes.io/projected/355ee0bb-eab4-4a74-b65d-591809af525a-kube-api-access-jxwcs\") pod \"keystone-c54f-account-create-update-zz4hp\" (UID: \"355ee0bb-eab4-4a74-b65d-591809af525a\") " pod="openstack/keystone-c54f-account-create-update-zz4hp" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.575708 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k7kv5"] Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.609669 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be309358-6c47-42fb-b818-014b339d4a53-operator-scripts\") pod \"placement-db-create-k7kv5\" (UID: \"be309358-6c47-42fb-b818-014b339d4a53\") " pod="openstack/placement-db-create-k7kv5" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.609716 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hskg9\" (UniqueName: \"kubernetes.io/projected/be309358-6c47-42fb-b818-014b339d4a53-kube-api-access-hskg9\") pod \"placement-db-create-k7kv5\" (UID: \"be309358-6c47-42fb-b818-014b339d4a53\") " pod="openstack/placement-db-create-k7kv5" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.688230 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c54f-account-create-update-zz4hp" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.701206 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-72f8-account-create-update-ddjnb"] Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.702277 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-72f8-account-create-update-ddjnb" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.704803 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.711821 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hskg9\" (UniqueName: \"kubernetes.io/projected/be309358-6c47-42fb-b818-014b339d4a53-kube-api-access-hskg9\") pod \"placement-db-create-k7kv5\" (UID: \"be309358-6c47-42fb-b818-014b339d4a53\") " pod="openstack/placement-db-create-k7kv5" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.712225 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be309358-6c47-42fb-b818-014b339d4a53-operator-scripts\") pod \"placement-db-create-k7kv5\" (UID: \"be309358-6c47-42fb-b818-014b339d4a53\") " pod="openstack/placement-db-create-k7kv5" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.712622 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-72f8-account-create-update-ddjnb"] Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.713451 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be309358-6c47-42fb-b818-014b339d4a53-operator-scripts\") pod \"placement-db-create-k7kv5\" (UID: \"be309358-6c47-42fb-b818-014b339d4a53\") " pod="openstack/placement-db-create-k7kv5" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.752699 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hskg9\" (UniqueName: \"kubernetes.io/projected/be309358-6c47-42fb-b818-014b339d4a53-kube-api-access-hskg9\") pod \"placement-db-create-k7kv5\" (UID: \"be309358-6c47-42fb-b818-014b339d4a53\") " pod="openstack/placement-db-create-k7kv5" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.814087 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9svpb\" (UniqueName: \"kubernetes.io/projected/1faf9f8b-598e-4015-ae89-1e289ef305d9-kube-api-access-9svpb\") pod \"placement-72f8-account-create-update-ddjnb\" (UID: \"1faf9f8b-598e-4015-ae89-1e289ef305d9\") " pod="openstack/placement-72f8-account-create-update-ddjnb" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.814121 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1faf9f8b-598e-4015-ae89-1e289ef305d9-operator-scripts\") pod \"placement-72f8-account-create-update-ddjnb\" (UID: \"1faf9f8b-598e-4015-ae89-1e289ef305d9\") " pod="openstack/placement-72f8-account-create-update-ddjnb" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.912228 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-k7kv5" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.915444 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9svpb\" (UniqueName: \"kubernetes.io/projected/1faf9f8b-598e-4015-ae89-1e289ef305d9-kube-api-access-9svpb\") pod \"placement-72f8-account-create-update-ddjnb\" (UID: \"1faf9f8b-598e-4015-ae89-1e289ef305d9\") " pod="openstack/placement-72f8-account-create-update-ddjnb" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.915499 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1faf9f8b-598e-4015-ae89-1e289ef305d9-operator-scripts\") pod \"placement-72f8-account-create-update-ddjnb\" (UID: \"1faf9f8b-598e-4015-ae89-1e289ef305d9\") " pod="openstack/placement-72f8-account-create-update-ddjnb" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.916386 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1faf9f8b-598e-4015-ae89-1e289ef305d9-operator-scripts\") pod \"placement-72f8-account-create-update-ddjnb\" (UID: \"1faf9f8b-598e-4015-ae89-1e289ef305d9\") " pod="openstack/placement-72f8-account-create-update-ddjnb" Dec 08 09:30:35 crc kubenswrapper[4662]: I1208 09:30:35.931915 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9svpb\" (UniqueName: \"kubernetes.io/projected/1faf9f8b-598e-4015-ae89-1e289ef305d9-kube-api-access-9svpb\") pod \"placement-72f8-account-create-update-ddjnb\" (UID: \"1faf9f8b-598e-4015-ae89-1e289ef305d9\") " pod="openstack/placement-72f8-account-create-update-ddjnb" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.041635 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-vh2cr"] Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.062140 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-72f8-account-create-update-ddjnb" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.201821 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bkngw"] Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.202832 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.211529 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-zbs5z" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.211710 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.244851 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-c54f-account-create-update-zz4hp"] Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.336570 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-db-sync-config-data\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.336811 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-config-data\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.336866 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-combined-ca-bundle\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.336907 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf2sb\" (UniqueName: \"kubernetes.io/projected/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-kube-api-access-pf2sb\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.337000 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bkngw"] Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.342826 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vh2cr" event={"ID":"4c05fb17-8432-4eaa-a64c-a1255b17bdce","Type":"ContainerStarted","Data":"a2f01fdd8118df877f22973fda5439bb660a22fff8661dc3c0e1577d0c3f2e71"} Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.363999 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tx8rx" podUID="4f875ff2-9f06-470b-89dd-2f6215a7e40c" containerName="ovn-controller" probeResult="failure" output=< Dec 08 09:30:36 crc kubenswrapper[4662]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 08 09:30:36 crc kubenswrapper[4662]: > Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.440646 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-combined-ca-bundle\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.440714 4662 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-pf2sb\" (UniqueName: \"kubernetes.io/projected/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-kube-api-access-pf2sb\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.440795 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-db-sync-config-data\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.440824 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-config-data\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.452686 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-config-data\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.453168 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-combined-ca-bundle\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.453486 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-db-sync-config-data\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.479253 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf2sb\" (UniqueName: \"kubernetes.io/projected/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-kube-api-access-pf2sb\") pod \"glance-db-sync-bkngw\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.537573 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-k7kv5"] Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.573219 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bkngw" Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.748875 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-72f8-account-create-update-ddjnb"] Dec 08 09:30:36 crc kubenswrapper[4662]: I1208 09:30:36.804790 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.093240 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bkngw"] Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.359722 4662 generic.go:334] "Generic (PLEG): container finished" podID="355ee0bb-eab4-4a74-b65d-591809af525a" containerID="7a7f9377c6401cc5a226d488de2b8f11c315ae55a5d8b50d5b570c8921dddc52" exitCode=0 Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.359880 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c54f-account-create-update-zz4hp" event={"ID":"355ee0bb-eab4-4a74-b65d-591809af525a","Type":"ContainerDied","Data":"7a7f9377c6401cc5a226d488de2b8f11c315ae55a5d8b50d5b570c8921dddc52"} Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.359945 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c54f-account-create-update-zz4hp" event={"ID":"355ee0bb-eab4-4a74-b65d-591809af525a","Type":"ContainerStarted","Data":"0d1b8ece1f00af4a258e3f85e1e9f54e67c69938ec99464d5e5b309c8015dbbb"} Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.373000 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72f8-account-create-update-ddjnb" event={"ID":"1faf9f8b-598e-4015-ae89-1e289ef305d9","Type":"ContainerStarted","Data":"17a0de156814f21d842431fd8b301bf727b7a0c13d1bc52e550807d6fd995e39"} Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.373048 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72f8-account-create-update-ddjnb" event={"ID":"1faf9f8b-598e-4015-ae89-1e289ef305d9","Type":"ContainerStarted","Data":"50353bcbdfea69ca85d9a5ae1453cef916cfc5d0351dc0ca7363bb9da5986df3"} Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.376510 4662 generic.go:334] "Generic (PLEG): container finished" podID="be309358-6c47-42fb-b818-014b339d4a53" containerID="1c50b92edc817a5c8cf63eb760f971f87c8fbc9154f36e8508ac8fa546731fbe" exitCode=0 Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.376561 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k7kv5" event={"ID":"be309358-6c47-42fb-b818-014b339d4a53","Type":"ContainerDied","Data":"1c50b92edc817a5c8cf63eb760f971f87c8fbc9154f36e8508ac8fa546731fbe"} Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.376609 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k7kv5" event={"ID":"be309358-6c47-42fb-b818-014b339d4a53","Type":"ContainerStarted","Data":"357fb700568c222d6be2106a52158f0edcdf354ac8ed7af0d316f356904adf76"} Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.380529 4662 generic.go:334] "Generic (PLEG): container finished" podID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" containerID="3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c" exitCode=0 Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.380672 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" 
event={"ID":"a9b3e5a2-0303-435d-9bd7-763b2f802e46","Type":"ContainerDied","Data":"3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c"} Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.386408 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bkngw" event={"ID":"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e","Type":"ContainerStarted","Data":"8a9531c09dad394e5d8f4ffa6726377098e60b8c30ed6647fadf40ec4918dc2b"} Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.388365 4662 generic.go:334] "Generic (PLEG): container finished" podID="4c05fb17-8432-4eaa-a64c-a1255b17bdce" containerID="d0fa5c4d4187dc0dd9998a0a6fb189d7eea1cf8f447e07b923578cbeee487553" exitCode=0 Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.388411 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vh2cr" event={"ID":"4c05fb17-8432-4eaa-a64c-a1255b17bdce","Type":"ContainerDied","Data":"d0fa5c4d4187dc0dd9998a0a6fb189d7eea1cf8f447e07b923578cbeee487553"} Dec 08 09:30:37 crc kubenswrapper[4662]: I1208 09:30:37.515914 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-72f8-account-create-update-ddjnb" podStartSLOduration=2.515899132 podStartE2EDuration="2.515899132s" podCreationTimestamp="2025-12-08 09:30:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:30:37.482506389 +0000 UTC m=+961.051534379" watchObservedRunningTime="2025-12-08 09:30:37.515899132 +0000 UTC m=+961.084927122" Dec 08 09:30:38 crc kubenswrapper[4662]: I1208 09:30:38.397966 4662 generic.go:334] "Generic (PLEG): container finished" podID="1faf9f8b-598e-4015-ae89-1e289ef305d9" containerID="17a0de156814f21d842431fd8b301bf727b7a0c13d1bc52e550807d6fd995e39" exitCode=0 Dec 08 09:30:38 crc kubenswrapper[4662]: I1208 09:30:38.398245 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72f8-account-create-update-ddjnb" event={"ID":"1faf9f8b-598e-4015-ae89-1e289ef305d9","Type":"ContainerDied","Data":"17a0de156814f21d842431fd8b301bf727b7a0c13d1bc52e550807d6fd995e39"} Dec 08 09:30:38 crc kubenswrapper[4662]: I1208 09:30:38.400645 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9b3e5a2-0303-435d-9bd7-763b2f802e46","Type":"ContainerStarted","Data":"49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60"} Dec 08 09:30:38 crc kubenswrapper[4662]: I1208 09:30:38.401384 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 08 09:30:38 crc kubenswrapper[4662]: I1208 09:30:38.450488 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=-9223371968.40431 podStartE2EDuration="1m8.450465338s" podCreationTimestamp="2025-12-08 09:29:30 +0000 UTC" firstStartedPulling="2025-12-08 09:29:32.677303955 +0000 UTC m=+896.246331945" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:30:38.44239254 +0000 UTC m=+962.011420550" watchObservedRunningTime="2025-12-08 09:30:38.450465338 +0000 UTC m=+962.019493328" Dec 08 09:30:38 crc kubenswrapper[4662]: I1208 09:30:38.969555 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-c54f-account-create-update-zz4hp" Dec 08 09:30:38 crc kubenswrapper[4662]: I1208 09:30:38.975037 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vh2cr" Dec 08 09:30:38 crc kubenswrapper[4662]: I1208 09:30:38.998721 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k7kv5" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.027397 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c05fb17-8432-4eaa-a64c-a1255b17bdce-operator-scripts\") pod \"4c05fb17-8432-4eaa-a64c-a1255b17bdce\" (UID: \"4c05fb17-8432-4eaa-a64c-a1255b17bdce\") " Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.027599 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxwcs\" (UniqueName: \"kubernetes.io/projected/355ee0bb-eab4-4a74-b65d-591809af525a-kube-api-access-jxwcs\") pod \"355ee0bb-eab4-4a74-b65d-591809af525a\" (UID: \"355ee0bb-eab4-4a74-b65d-591809af525a\") " Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.027632 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntxcr\" (UniqueName: \"kubernetes.io/projected/4c05fb17-8432-4eaa-a64c-a1255b17bdce-kube-api-access-ntxcr\") pod \"4c05fb17-8432-4eaa-a64c-a1255b17bdce\" (UID: \"4c05fb17-8432-4eaa-a64c-a1255b17bdce\") " Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.027661 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355ee0bb-eab4-4a74-b65d-591809af525a-operator-scripts\") pod \"355ee0bb-eab4-4a74-b65d-591809af525a\" (UID: \"355ee0bb-eab4-4a74-b65d-591809af525a\") " Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.028301 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c05fb17-8432-4eaa-a64c-a1255b17bdce-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c05fb17-8432-4eaa-a64c-a1255b17bdce" (UID: "4c05fb17-8432-4eaa-a64c-a1255b17bdce"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.028478 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/355ee0bb-eab4-4a74-b65d-591809af525a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "355ee0bb-eab4-4a74-b65d-591809af525a" (UID: "355ee0bb-eab4-4a74-b65d-591809af525a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.050521 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c05fb17-8432-4eaa-a64c-a1255b17bdce-kube-api-access-ntxcr" (OuterVolumeSpecName: "kube-api-access-ntxcr") pod "4c05fb17-8432-4eaa-a64c-a1255b17bdce" (UID: "4c05fb17-8432-4eaa-a64c-a1255b17bdce"). InnerVolumeSpecName "kube-api-access-ntxcr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.050593 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/355ee0bb-eab4-4a74-b65d-591809af525a-kube-api-access-jxwcs" (OuterVolumeSpecName: "kube-api-access-jxwcs") pod "355ee0bb-eab4-4a74-b65d-591809af525a" (UID: "355ee0bb-eab4-4a74-b65d-591809af525a"). InnerVolumeSpecName "kube-api-access-jxwcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.129460 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be309358-6c47-42fb-b818-014b339d4a53-operator-scripts\") pod \"be309358-6c47-42fb-b818-014b339d4a53\" (UID: \"be309358-6c47-42fb-b818-014b339d4a53\") " Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.129525 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hskg9\" (UniqueName: \"kubernetes.io/projected/be309358-6c47-42fb-b818-014b339d4a53-kube-api-access-hskg9\") pod \"be309358-6c47-42fb-b818-014b339d4a53\" (UID: \"be309358-6c47-42fb-b818-014b339d4a53\") " Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.129862 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxwcs\" (UniqueName: \"kubernetes.io/projected/355ee0bb-eab4-4a74-b65d-591809af525a-kube-api-access-jxwcs\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.129880 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntxcr\" (UniqueName: \"kubernetes.io/projected/4c05fb17-8432-4eaa-a64c-a1255b17bdce-kube-api-access-ntxcr\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.129890 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/355ee0bb-eab4-4a74-b65d-591809af525a-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.129899 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c05fb17-8432-4eaa-a64c-a1255b17bdce-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.130012 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be309358-6c47-42fb-b818-014b339d4a53-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be309358-6c47-42fb-b818-014b339d4a53" (UID: "be309358-6c47-42fb-b818-014b339d4a53"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.133005 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be309358-6c47-42fb-b818-014b339d4a53-kube-api-access-hskg9" (OuterVolumeSpecName: "kube-api-access-hskg9") pod "be309358-6c47-42fb-b818-014b339d4a53" (UID: "be309358-6c47-42fb-b818-014b339d4a53"). InnerVolumeSpecName "kube-api-access-hskg9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.231453 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be309358-6c47-42fb-b818-014b339d4a53-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.231486 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hskg9\" (UniqueName: \"kubernetes.io/projected/be309358-6c47-42fb-b818-014b339d4a53-kube-api-access-hskg9\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.418173 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-vh2cr" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.418188 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-vh2cr" event={"ID":"4c05fb17-8432-4eaa-a64c-a1255b17bdce","Type":"ContainerDied","Data":"a2f01fdd8118df877f22973fda5439bb660a22fff8661dc3c0e1577d0c3f2e71"} Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.418239 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2f01fdd8118df877f22973fda5439bb660a22fff8661dc3c0e1577d0c3f2e71" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.420659 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-c54f-account-create-update-zz4hp" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.420656 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-c54f-account-create-update-zz4hp" event={"ID":"355ee0bb-eab4-4a74-b65d-591809af525a","Type":"ContainerDied","Data":"0d1b8ece1f00af4a258e3f85e1e9f54e67c69938ec99464d5e5b309c8015dbbb"} Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.420778 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d1b8ece1f00af4a258e3f85e1e9f54e67c69938ec99464d5e5b309c8015dbbb" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.423200 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-k7kv5" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.426554 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-k7kv5" event={"ID":"be309358-6c47-42fb-b818-014b339d4a53","Type":"ContainerDied","Data":"357fb700568c222d6be2106a52158f0edcdf354ac8ed7af0d316f356904adf76"} Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.426608 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="357fb700568c222d6be2106a52158f0edcdf354ac8ed7af0d316f356904adf76" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.806133 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-72f8-account-create-update-ddjnb" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.945375 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9svpb\" (UniqueName: \"kubernetes.io/projected/1faf9f8b-598e-4015-ae89-1e289ef305d9-kube-api-access-9svpb\") pod \"1faf9f8b-598e-4015-ae89-1e289ef305d9\" (UID: \"1faf9f8b-598e-4015-ae89-1e289ef305d9\") " Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.945484 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1faf9f8b-598e-4015-ae89-1e289ef305d9-operator-scripts\") pod \"1faf9f8b-598e-4015-ae89-1e289ef305d9\" (UID: \"1faf9f8b-598e-4015-ae89-1e289ef305d9\") " Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.946625 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1faf9f8b-598e-4015-ae89-1e289ef305d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1faf9f8b-598e-4015-ae89-1e289ef305d9" (UID: "1faf9f8b-598e-4015-ae89-1e289ef305d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:30:39 crc kubenswrapper[4662]: I1208 09:30:39.973292 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1faf9f8b-598e-4015-ae89-1e289ef305d9-kube-api-access-9svpb" (OuterVolumeSpecName: "kube-api-access-9svpb") pod "1faf9f8b-598e-4015-ae89-1e289ef305d9" (UID: "1faf9f8b-598e-4015-ae89-1e289ef305d9"). InnerVolumeSpecName "kube-api-access-9svpb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:30:40 crc kubenswrapper[4662]: I1208 09:30:40.047129 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9svpb\" (UniqueName: \"kubernetes.io/projected/1faf9f8b-598e-4015-ae89-1e289ef305d9-kube-api-access-9svpb\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:40 crc kubenswrapper[4662]: I1208 09:30:40.047337 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1faf9f8b-598e-4015-ae89-1e289ef305d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:40 crc kubenswrapper[4662]: I1208 09:30:40.432863 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-72f8-account-create-update-ddjnb" event={"ID":"1faf9f8b-598e-4015-ae89-1e289ef305d9","Type":"ContainerDied","Data":"50353bcbdfea69ca85d9a5ae1453cef916cfc5d0351dc0ca7363bb9da5986df3"} Dec 08 09:30:40 crc kubenswrapper[4662]: I1208 09:30:40.432894 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50353bcbdfea69ca85d9a5ae1453cef916cfc5d0351dc0ca7363bb9da5986df3" Dec 08 09:30:40 crc kubenswrapper[4662]: I1208 09:30:40.432910 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-72f8-account-create-update-ddjnb" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.282473 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-tx8rx" podUID="4f875ff2-9f06-470b-89dd-2f6215a7e40c" containerName="ovn-controller" probeResult="failure" output=< Dec 08 09:30:41 crc kubenswrapper[4662]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Dec 08 09:30:41 crc kubenswrapper[4662]: > Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.310033 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rbsxq" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.359630 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-rbsxq" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.589605 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-tx8rx-config-fgpkf"] Dec 08 09:30:41 crc kubenswrapper[4662]: E1208 09:30:41.590794 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c05fb17-8432-4eaa-a64c-a1255b17bdce" containerName="mariadb-database-create" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.590885 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c05fb17-8432-4eaa-a64c-a1255b17bdce" containerName="mariadb-database-create" Dec 08 09:30:41 crc kubenswrapper[4662]: E1208 09:30:41.590975 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be309358-6c47-42fb-b818-014b339d4a53" containerName="mariadb-database-create" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.591030 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="be309358-6c47-42fb-b818-014b339d4a53" containerName="mariadb-database-create" Dec 08 09:30:41 crc kubenswrapper[4662]: E1208 09:30:41.591085 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1faf9f8b-598e-4015-ae89-1e289ef305d9" containerName="mariadb-account-create-update" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.591135 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="1faf9f8b-598e-4015-ae89-1e289ef305d9" containerName="mariadb-account-create-update" Dec 08 09:30:41 crc kubenswrapper[4662]: E1208 09:30:41.591190 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="355ee0bb-eab4-4a74-b65d-591809af525a" containerName="mariadb-account-create-update" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.591253 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="355ee0bb-eab4-4a74-b65d-591809af525a" containerName="mariadb-account-create-update" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.591502 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="355ee0bb-eab4-4a74-b65d-591809af525a" containerName="mariadb-account-create-update" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.591586 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="be309358-6c47-42fb-b818-014b339d4a53" containerName="mariadb-database-create" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.591666 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="1faf9f8b-598e-4015-ae89-1e289ef305d9" containerName="mariadb-account-create-update" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.591770 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c05fb17-8432-4eaa-a64c-a1255b17bdce" 
containerName="mariadb-database-create" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.592398 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.600364 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.608467 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tx8rx-config-fgpkf"] Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.672367 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run-ovn\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.672451 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8khr\" (UniqueName: \"kubernetes.io/projected/12021a0c-10d4-4421-bf54-eb1dd2867aa8-kube-api-access-g8khr\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.672503 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-log-ovn\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.672525 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-additional-scripts\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.672551 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-scripts\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.672612 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.774416 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-log-ovn\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.774471 4662 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-additional-scripts\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.774497 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-scripts\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.774567 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.774629 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run-ovn\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.774675 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8khr\" (UniqueName: \"kubernetes.io/projected/12021a0c-10d4-4421-bf54-eb1dd2867aa8-kube-api-access-g8khr\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.776376 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.776453 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run-ovn\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.777011 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-additional-scripts\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.777555 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-scripts\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.777649 4662 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-log-ovn\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.821899 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8khr\" (UniqueName: \"kubernetes.io/projected/12021a0c-10d4-4421-bf54-eb1dd2867aa8-kube-api-access-g8khr\") pod \"ovn-controller-tx8rx-config-fgpkf\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:41 crc kubenswrapper[4662]: I1208 09:30:41.912087 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:42 crc kubenswrapper[4662]: I1208 09:30:42.447279 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-tx8rx-config-fgpkf"] Dec 08 09:30:42 crc kubenswrapper[4662]: I1208 09:30:42.710703 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a9f9be7d-4423-489a-a794-e022a83c9e51" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Dec 08 09:30:43 crc kubenswrapper[4662]: I1208 09:30:43.466665 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tx8rx-config-fgpkf" event={"ID":"12021a0c-10d4-4421-bf54-eb1dd2867aa8","Type":"ContainerStarted","Data":"872fe1d0dde292a7ddd187e280826ab4cd43c9aba6797c243281ddb134822f95"} Dec 08 09:30:43 crc kubenswrapper[4662]: I1208 09:30:43.466724 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tx8rx-config-fgpkf" event={"ID":"12021a0c-10d4-4421-bf54-eb1dd2867aa8","Type":"ContainerStarted","Data":"e1fe88644342fcacf073eda0265d4a6af439ae747d09edc55fd60673b243d9ba"} Dec 08 09:30:44 crc kubenswrapper[4662]: I1208 09:30:44.479613 4662 generic.go:334] "Generic (PLEG): container finished" podID="12021a0c-10d4-4421-bf54-eb1dd2867aa8" containerID="872fe1d0dde292a7ddd187e280826ab4cd43c9aba6797c243281ddb134822f95" exitCode=0 Dec 08 09:30:44 crc kubenswrapper[4662]: I1208 09:30:44.479869 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tx8rx-config-fgpkf" event={"ID":"12021a0c-10d4-4421-bf54-eb1dd2867aa8","Type":"ContainerDied","Data":"872fe1d0dde292a7ddd187e280826ab4cd43c9aba6797c243281ddb134822f95"} Dec 08 09:30:46 crc kubenswrapper[4662]: I1208 09:30:46.299495 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-tx8rx" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.060383 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.490396 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-z9n8v"] Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.499350 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-z9n8v" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.514174 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-z9n8v"] Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.566450 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ba3-account-create-update-8hzfj"] Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.568174 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ba3-account-create-update-8hzfj" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.576799 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.586289 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ba3-account-create-update-8hzfj"] Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.618914 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96xb4\" (UniqueName: \"kubernetes.io/projected/ec279565-62e7-416e-9518-f4fb11ad219b-kube-api-access-96xb4\") pod \"cinder-db-create-z9n8v\" (UID: \"ec279565-62e7-416e-9518-f4fb11ad219b\") " pod="openstack/cinder-db-create-z9n8v" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.618988 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec279565-62e7-416e-9518-f4fb11ad219b-operator-scripts\") pod \"cinder-db-create-z9n8v\" (UID: \"ec279565-62e7-416e-9518-f4fb11ad219b\") " pod="openstack/cinder-db-create-z9n8v" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.650803 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-nwbcp"] Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.653432 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nwbcp" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.670071 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nwbcp"] Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.718209 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.720845 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96xb4\" (UniqueName: \"kubernetes.io/projected/ec279565-62e7-416e-9518-f4fb11ad219b-kube-api-access-96xb4\") pod \"cinder-db-create-z9n8v\" (UID: \"ec279565-62e7-416e-9518-f4fb11ad219b\") " pod="openstack/cinder-db-create-z9n8v" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.720889 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbdk7\" (UniqueName: \"kubernetes.io/projected/b4e3ec89-606e-422b-a9e8-48f37a44cc61-kube-api-access-bbdk7\") pod \"cinder-6ba3-account-create-update-8hzfj\" (UID: \"b4e3ec89-606e-422b-a9e8-48f37a44cc61\") " pod="openstack/cinder-6ba3-account-create-update-8hzfj" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.720917 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec279565-62e7-416e-9518-f4fb11ad219b-operator-scripts\") pod \"cinder-db-create-z9n8v\" (UID: \"ec279565-62e7-416e-9518-f4fb11ad219b\") " pod="openstack/cinder-db-create-z9n8v" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.720952 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e3ec89-606e-422b-a9e8-48f37a44cc61-operator-scripts\") pod \"cinder-6ba3-account-create-update-8hzfj\" (UID: \"b4e3ec89-606e-422b-a9e8-48f37a44cc61\") " pod="openstack/cinder-6ba3-account-create-update-8hzfj" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.721894 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec279565-62e7-416e-9518-f4fb11ad219b-operator-scripts\") pod \"cinder-db-create-z9n8v\" (UID: \"ec279565-62e7-416e-9518-f4fb11ad219b\") " pod="openstack/cinder-db-create-z9n8v" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.762448 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96xb4\" (UniqueName: \"kubernetes.io/projected/ec279565-62e7-416e-9518-f4fb11ad219b-kube-api-access-96xb4\") pod \"cinder-db-create-z9n8v\" (UID: \"ec279565-62e7-416e-9518-f4fb11ad219b\") " pod="openstack/cinder-db-create-z9n8v" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.812836 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-7655-account-create-update-vv749"] Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.813917 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7655-account-create-update-vv749" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.817729 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.821144 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-z9n8v" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.822166 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-operator-scripts\") pod \"barbican-db-create-nwbcp\" (UID: \"1b9e44c6-473a-42d0-8c92-6542c91e8e1e\") " pod="openstack/barbican-db-create-nwbcp" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.822293 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbdk7\" (UniqueName: \"kubernetes.io/projected/b4e3ec89-606e-422b-a9e8-48f37a44cc61-kube-api-access-bbdk7\") pod \"cinder-6ba3-account-create-update-8hzfj\" (UID: \"b4e3ec89-606e-422b-a9e8-48f37a44cc61\") " pod="openstack/cinder-6ba3-account-create-update-8hzfj" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.822352 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-966rd\" (UniqueName: \"kubernetes.io/projected/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-kube-api-access-966rd\") pod \"barbican-db-create-nwbcp\" (UID: \"1b9e44c6-473a-42d0-8c92-6542c91e8e1e\") " pod="openstack/barbican-db-create-nwbcp" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.822378 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e3ec89-606e-422b-a9e8-48f37a44cc61-operator-scripts\") pod \"cinder-6ba3-account-create-update-8hzfj\" (UID: \"b4e3ec89-606e-422b-a9e8-48f37a44cc61\") " pod="openstack/cinder-6ba3-account-create-update-8hzfj" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.824952 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e3ec89-606e-422b-a9e8-48f37a44cc61-operator-scripts\") pod \"cinder-6ba3-account-create-update-8hzfj\" (UID: \"b4e3ec89-606e-422b-a9e8-48f37a44cc61\") " pod="openstack/cinder-6ba3-account-create-update-8hzfj" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.838796 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7655-account-create-update-vv749"] Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.873464 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbdk7\" (UniqueName: \"kubernetes.io/projected/b4e3ec89-606e-422b-a9e8-48f37a44cc61-kube-api-access-bbdk7\") pod \"cinder-6ba3-account-create-update-8hzfj\" (UID: \"b4e3ec89-606e-422b-a9e8-48f37a44cc61\") " pod="openstack/cinder-6ba3-account-create-update-8hzfj" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.899925 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ba3-account-create-update-8hzfj" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.910913 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-pwrws"] Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.918160 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-pwrws" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.920529 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pwrws"] Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.926878 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-operator-scripts\") pod \"barbican-db-create-nwbcp\" (UID: \"1b9e44c6-473a-42d0-8c92-6542c91e8e1e\") " pod="openstack/barbican-db-create-nwbcp" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.927236 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6438122c-3da2-4f70-bf28-4c184f024504-operator-scripts\") pod \"barbican-7655-account-create-update-vv749\" (UID: \"6438122c-3da2-4f70-bf28-4c184f024504\") " pod="openstack/barbican-7655-account-create-update-vv749" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.927345 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-966rd\" (UniqueName: \"kubernetes.io/projected/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-kube-api-access-966rd\") pod \"barbican-db-create-nwbcp\" (UID: \"1b9e44c6-473a-42d0-8c92-6542c91e8e1e\") " pod="openstack/barbican-db-create-nwbcp" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.927612 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6sl8\" (UniqueName: \"kubernetes.io/projected/6438122c-3da2-4f70-bf28-4c184f024504-kube-api-access-h6sl8\") pod \"barbican-7655-account-create-update-vv749\" (UID: \"6438122c-3da2-4f70-bf28-4c184f024504\") " pod="openstack/barbican-7655-account-create-update-vv749" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.927803 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-operator-scripts\") pod \"barbican-db-create-nwbcp\" (UID: \"1b9e44c6-473a-42d0-8c92-6542c91e8e1e\") " pod="openstack/barbican-db-create-nwbcp" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.964323 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-966rd\" (UniqueName: \"kubernetes.io/projected/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-kube-api-access-966rd\") pod \"barbican-db-create-nwbcp\" (UID: \"1b9e44c6-473a-42d0-8c92-6542c91e8e1e\") " pod="openstack/barbican-db-create-nwbcp" Dec 08 09:30:52 crc kubenswrapper[4662]: I1208 09:30:52.987151 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nwbcp" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.029719 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6438122c-3da2-4f70-bf28-4c184f024504-operator-scripts\") pod \"barbican-7655-account-create-update-vv749\" (UID: \"6438122c-3da2-4f70-bf28-4c184f024504\") " pod="openstack/barbican-7655-account-create-update-vv749" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.029796 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-operator-scripts\") pod \"neutron-db-create-pwrws\" (UID: \"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037\") " pod="openstack/neutron-db-create-pwrws" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.029823 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6sl8\" (UniqueName: \"kubernetes.io/projected/6438122c-3da2-4f70-bf28-4c184f024504-kube-api-access-h6sl8\") pod \"barbican-7655-account-create-update-vv749\" (UID: \"6438122c-3da2-4f70-bf28-4c184f024504\") " pod="openstack/barbican-7655-account-create-update-vv749" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.029861 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pbr9k\" (UniqueName: \"kubernetes.io/projected/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-kube-api-access-pbr9k\") pod \"neutron-db-create-pwrws\" (UID: \"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037\") " pod="openstack/neutron-db-create-pwrws" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.030485 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6438122c-3da2-4f70-bf28-4c184f024504-operator-scripts\") pod \"barbican-7655-account-create-update-vv749\" (UID: \"6438122c-3da2-4f70-bf28-4c184f024504\") " pod="openstack/barbican-7655-account-create-update-vv749" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.089959 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-5ll52"] Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.091052 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:53 crc kubenswrapper[4662]: W1208 09:30:53.124203 4662 reflector.go:561] object-"openstack"/"keystone-keystone-dockercfg-cm5ws": failed to list *v1.Secret: secrets "keystone-keystone-dockercfg-cm5ws" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Dec 08 09:30:53 crc kubenswrapper[4662]: E1208 09:30:53.124257 4662 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"keystone-keystone-dockercfg-cm5ws\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"keystone-keystone-dockercfg-cm5ws\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.124402 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.124684 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.124851 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6sl8\" (UniqueName: \"kubernetes.io/projected/6438122c-3da2-4f70-bf28-4c184f024504-kube-api-access-h6sl8\") pod \"barbican-7655-account-create-update-vv749\" (UID: \"6438122c-3da2-4f70-bf28-4c184f024504\") " pod="openstack/barbican-7655-account-create-update-vv749" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.125114 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.142705 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-operator-scripts\") pod \"neutron-db-create-pwrws\" (UID: \"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037\") " pod="openstack/neutron-db-create-pwrws" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.142826 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pbr9k\" (UniqueName: \"kubernetes.io/projected/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-kube-api-access-pbr9k\") pod \"neutron-db-create-pwrws\" (UID: \"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037\") " pod="openstack/neutron-db-create-pwrws" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.144441 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-operator-scripts\") pod \"neutron-db-create-pwrws\" (UID: \"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037\") " pod="openstack/neutron-db-create-pwrws" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.151164 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7655-account-create-update-vv749" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.211802 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-39b5-account-create-update-296w8"] Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.212782 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-39b5-account-create-update-296w8" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.211823 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pbr9k\" (UniqueName: \"kubernetes.io/projected/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-kube-api-access-pbr9k\") pod \"neutron-db-create-pwrws\" (UID: \"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037\") " pod="openstack/neutron-db-create-pwrws" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.218597 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.225781 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-39b5-account-create-update-296w8"] Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.227274 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5ll52"] Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.247456 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99nzt\" (UniqueName: \"kubernetes.io/projected/70d93045-d92f-482f-af8d-97f6f752703b-kube-api-access-99nzt\") pod \"keystone-db-sync-5ll52\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") " pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.247967 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-config-data\") pod \"keystone-db-sync-5ll52\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") " pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.248067 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-combined-ca-bundle\") pod \"keystone-db-sync-5ll52\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") " pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.253244 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-pwrws" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.350241 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-config-data\") pod \"keystone-db-sync-5ll52\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") " pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.350307 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-combined-ca-bundle\") pod \"keystone-db-sync-5ll52\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") " pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.350350 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/378bb91e-6330-4052-94d8-01bf45db8010-operator-scripts\") pod \"neutron-39b5-account-create-update-296w8\" (UID: \"378bb91e-6330-4052-94d8-01bf45db8010\") " pod="openstack/neutron-39b5-account-create-update-296w8" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.350410 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99nzt\" (UniqueName: \"kubernetes.io/projected/70d93045-d92f-482f-af8d-97f6f752703b-kube-api-access-99nzt\") pod \"keystone-db-sync-5ll52\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") " pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.350431 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9crth\" (UniqueName: \"kubernetes.io/projected/378bb91e-6330-4052-94d8-01bf45db8010-kube-api-access-9crth\") pod \"neutron-39b5-account-create-update-296w8\" (UID: \"378bb91e-6330-4052-94d8-01bf45db8010\") " pod="openstack/neutron-39b5-account-create-update-296w8" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.355292 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-combined-ca-bundle\") pod \"keystone-db-sync-5ll52\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") " pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.359078 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-config-data\") pod \"keystone-db-sync-5ll52\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") " pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.375397 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99nzt\" (UniqueName: \"kubernetes.io/projected/70d93045-d92f-482f-af8d-97f6f752703b-kube-api-access-99nzt\") pod \"keystone-db-sync-5ll52\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") " pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.452164 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/378bb91e-6330-4052-94d8-01bf45db8010-operator-scripts\") pod \"neutron-39b5-account-create-update-296w8\" (UID: 
\"378bb91e-6330-4052-94d8-01bf45db8010\") " pod="openstack/neutron-39b5-account-create-update-296w8" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.452247 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9crth\" (UniqueName: \"kubernetes.io/projected/378bb91e-6330-4052-94d8-01bf45db8010-kube-api-access-9crth\") pod \"neutron-39b5-account-create-update-296w8\" (UID: \"378bb91e-6330-4052-94d8-01bf45db8010\") " pod="openstack/neutron-39b5-account-create-update-296w8" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.453127 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/378bb91e-6330-4052-94d8-01bf45db8010-operator-scripts\") pod \"neutron-39b5-account-create-update-296w8\" (UID: \"378bb91e-6330-4052-94d8-01bf45db8010\") " pod="openstack/neutron-39b5-account-create-update-296w8" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.487982 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9crth\" (UniqueName: \"kubernetes.io/projected/378bb91e-6330-4052-94d8-01bf45db8010-kube-api-access-9crth\") pod \"neutron-39b5-account-create-update-296w8\" (UID: \"378bb91e-6330-4052-94d8-01bf45db8010\") " pod="openstack/neutron-39b5-account-create-update-296w8" Dec 08 09:30:53 crc kubenswrapper[4662]: I1208 09:30:53.574013 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-39b5-account-create-update-296w8" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.157587 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cm5ws" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.166571 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5ll52" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.562910 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.567582 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-tx8rx-config-fgpkf" event={"ID":"12021a0c-10d4-4421-bf54-eb1dd2867aa8","Type":"ContainerDied","Data":"e1fe88644342fcacf073eda0265d4a6af439ae747d09edc55fd60673b243d9ba"} Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.567629 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1fe88644342fcacf073eda0265d4a6af439ae747d09edc55fd60673b243d9ba" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.567683 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-tx8rx-config-fgpkf" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.670802 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-additional-scripts\") pod \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.670884 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run\") pod \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.670924 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-scripts\") pod \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.670939 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run-ovn\") pod \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.671013 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-log-ovn\") pod \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.671056 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8khr\" (UniqueName: \"kubernetes.io/projected/12021a0c-10d4-4421-bf54-eb1dd2867aa8-kube-api-access-g8khr\") pod \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\" (UID: \"12021a0c-10d4-4421-bf54-eb1dd2867aa8\") " Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.672857 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-scripts" (OuterVolumeSpecName: "scripts") pod "12021a0c-10d4-4421-bf54-eb1dd2867aa8" (UID: "12021a0c-10d4-4421-bf54-eb1dd2867aa8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.673712 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "12021a0c-10d4-4421-bf54-eb1dd2867aa8" (UID: "12021a0c-10d4-4421-bf54-eb1dd2867aa8"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.673765 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run" (OuterVolumeSpecName: "var-run") pod "12021a0c-10d4-4421-bf54-eb1dd2867aa8" (UID: "12021a0c-10d4-4421-bf54-eb1dd2867aa8"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.673793 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "12021a0c-10d4-4421-bf54-eb1dd2867aa8" (UID: "12021a0c-10d4-4421-bf54-eb1dd2867aa8"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.673814 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "12021a0c-10d4-4421-bf54-eb1dd2867aa8" (UID: "12021a0c-10d4-4421-bf54-eb1dd2867aa8"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.686343 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12021a0c-10d4-4421-bf54-eb1dd2867aa8-kube-api-access-g8khr" (OuterVolumeSpecName: "kube-api-access-g8khr") pod "12021a0c-10d4-4421-bf54-eb1dd2867aa8" (UID: "12021a0c-10d4-4421-bf54-eb1dd2867aa8"). InnerVolumeSpecName "kube-api-access-g8khr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.774306 4662 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.774337 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.774349 4662 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.774361 4662 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/12021a0c-10d4-4421-bf54-eb1dd2867aa8-var-log-ovn\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.774381 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8khr\" (UniqueName: \"kubernetes.io/projected/12021a0c-10d4-4421-bf54-eb1dd2867aa8-kube-api-access-g8khr\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:54 crc kubenswrapper[4662]: I1208 09:30:54.774395 4662 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/12021a0c-10d4-4421-bf54-eb1dd2867aa8-additional-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.507208 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nwbcp"] Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.517125 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-z9n8v"] Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.623990 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-z9n8v" 
event={"ID":"ec279565-62e7-416e-9518-f4fb11ad219b","Type":"ContainerStarted","Data":"62083cbcea278a9b031c73274de817c64491351e9d0153b0d199056dd075ad06"} Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.655789 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nwbcp" event={"ID":"1b9e44c6-473a-42d0-8c92-6542c91e8e1e","Type":"ContainerStarted","Data":"2501df015ff116f06fc46011aa30ca9a34f89cd7fca36cc02c916eccb89ca39f"} Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.730669 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-pwrws"] Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.807412 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-7655-account-create-update-vv749"] Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.832691 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5ll52"] Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.843203 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-tx8rx-config-fgpkf"] Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.850663 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-tx8rx-config-fgpkf"] Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.881835 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-39b5-account-create-update-296w8"] Dec 08 09:30:55 crc kubenswrapper[4662]: W1208 09:30:55.902307 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod378bb91e_6330_4052_94d8_01bf45db8010.slice/crio-224ce30b83af3f495aaab1b83fde555839ed2177d8c9594e9219dd924ae03f93 WatchSource:0}: Error finding container 224ce30b83af3f495aaab1b83fde555839ed2177d8c9594e9219dd924ae03f93: Status 404 returned error can't find the container with id 224ce30b83af3f495aaab1b83fde555839ed2177d8c9594e9219dd924ae03f93 Dec 08 09:30:55 crc kubenswrapper[4662]: I1208 09:30:55.983457 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ba3-account-create-update-8hzfj"] Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.672679 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bkngw" event={"ID":"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e","Type":"ContainerStarted","Data":"0fdeca8043e91d3ea9633100c4c128efca4e9a1a499252a06dd77f8ad170b14e"} Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.674900 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7655-account-create-update-vv749" event={"ID":"6438122c-3da2-4f70-bf28-4c184f024504","Type":"ContainerStarted","Data":"6d64009b9a61d283afc45b4907f60760cd657093efed26a2e539ee736ae077b3"} Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.676420 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5ll52" event={"ID":"70d93045-d92f-482f-af8d-97f6f752703b","Type":"ContainerStarted","Data":"5ff02ec908beab71439933ab7827384f6bd7f197e4f45db4eafbe406c0a5024c"} Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.679657 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-39b5-account-create-update-296w8" event={"ID":"378bb91e-6330-4052-94d8-01bf45db8010","Type":"ContainerStarted","Data":"224ce30b83af3f495aaab1b83fde555839ed2177d8c9594e9219dd924ae03f93"} Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.682695 4662 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/cinder-6ba3-account-create-update-8hzfj" event={"ID":"b4e3ec89-606e-422b-a9e8-48f37a44cc61","Type":"ContainerStarted","Data":"14f077d92ddc09cef1de8eeec62a488b7bd527f6f31a97628b79914b8e09ecf1"} Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.682728 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ba3-account-create-update-8hzfj" event={"ID":"b4e3ec89-606e-422b-a9e8-48f37a44cc61","Type":"ContainerStarted","Data":"c2c4aa8889010b40027f214e7a57f95898bba29b11b8588b7990c3bdb67e7c2f"} Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.691851 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bkngw" podStartSLOduration=3.127199375 podStartE2EDuration="20.691839028s" podCreationTimestamp="2025-12-08 09:30:36 +0000 UTC" firstStartedPulling="2025-12-08 09:30:37.123480055 +0000 UTC m=+960.692508045" lastFinishedPulling="2025-12-08 09:30:54.688119708 +0000 UTC m=+978.257147698" observedRunningTime="2025-12-08 09:30:56.688444166 +0000 UTC m=+980.257472146" watchObservedRunningTime="2025-12-08 09:30:56.691839028 +0000 UTC m=+980.260867018" Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.724136 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12021a0c-10d4-4421-bf54-eb1dd2867aa8" path="/var/lib/kubelet/pods/12021a0c-10d4-4421-bf54-eb1dd2867aa8/volumes" Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.725752 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pwrws" event={"ID":"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037","Type":"ContainerStarted","Data":"5c173b91e7c51723c1260d1e71f9796b2d2ba6326221457687587dc505346205"} Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.725779 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nwbcp" event={"ID":"1b9e44c6-473a-42d0-8c92-6542c91e8e1e","Type":"ContainerStarted","Data":"ff2580b16c2638a6f12d2169f75eb8b9e05aa8365acccb82f8220248c9bfc2e9"} Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.736011 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-z9n8v" event={"ID":"ec279565-62e7-416e-9518-f4fb11ad219b","Type":"ContainerStarted","Data":"05535c9b3048a3a46b6fc0c83b08b9c1f57d7ed7a0f2b336445a82a4181a7a41"} Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.752474 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ba3-account-create-update-8hzfj" podStartSLOduration=4.752452896 podStartE2EDuration="4.752452896s" podCreationTimestamp="2025-12-08 09:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:30:56.733157075 +0000 UTC m=+980.302185065" watchObservedRunningTime="2025-12-08 09:30:56.752452896 +0000 UTC m=+980.321480886" Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 09:30:56.762682 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-nwbcp" podStartSLOduration=4.762665912 podStartE2EDuration="4.762665912s" podCreationTimestamp="2025-12-08 09:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:30:56.75628212 +0000 UTC m=+980.325310120" watchObservedRunningTime="2025-12-08 09:30:56.762665912 +0000 UTC m=+980.331693902" Dec 08 09:30:56 crc kubenswrapper[4662]: I1208 
09:30:56.882139 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-z9n8v" podStartSLOduration=4.882117132 podStartE2EDuration="4.882117132s" podCreationTimestamp="2025-12-08 09:30:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:30:56.874696221 +0000 UTC m=+980.443724221" watchObservedRunningTime="2025-12-08 09:30:56.882117132 +0000 UTC m=+980.451145122"
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.745202 4662 generic.go:334] "Generic (PLEG): container finished" podID="378bb91e-6330-4052-94d8-01bf45db8010" containerID="6fb3629dfc4441ef54b396654da65ab66742f145ab9d0e540fe40bb64c8de941" exitCode=0
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.745582 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-39b5-account-create-update-296w8" event={"ID":"378bb91e-6330-4052-94d8-01bf45db8010","Type":"ContainerDied","Data":"6fb3629dfc4441ef54b396654da65ab66742f145ab9d0e540fe40bb64c8de941"}
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.747603 4662 generic.go:334] "Generic (PLEG): container finished" podID="b4e3ec89-606e-422b-a9e8-48f37a44cc61" containerID="14f077d92ddc09cef1de8eeec62a488b7bd527f6f31a97628b79914b8e09ecf1" exitCode=0
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.747699 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ba3-account-create-update-8hzfj" event={"ID":"b4e3ec89-606e-422b-a9e8-48f37a44cc61","Type":"ContainerDied","Data":"14f077d92ddc09cef1de8eeec62a488b7bd527f6f31a97628b79914b8e09ecf1"}
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.754691 4662 generic.go:334] "Generic (PLEG): container finished" podID="c5dbeb2a-e269-4ad0-8abd-dcf1547c6037" containerID="79eac6f16770799e1eec94f762154403a6b4e608833cfbe2f9acd1b3a4bcb929" exitCode=0
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.754843 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pwrws" event={"ID":"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037","Type":"ContainerDied","Data":"79eac6f16770799e1eec94f762154403a6b4e608833cfbe2f9acd1b3a4bcb929"}
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.763208 4662 generic.go:334] "Generic (PLEG): container finished" podID="1b9e44c6-473a-42d0-8c92-6542c91e8e1e" containerID="ff2580b16c2638a6f12d2169f75eb8b9e05aa8365acccb82f8220248c9bfc2e9" exitCode=0
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.763286 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nwbcp" event={"ID":"1b9e44c6-473a-42d0-8c92-6542c91e8e1e","Type":"ContainerDied","Data":"ff2580b16c2638a6f12d2169f75eb8b9e05aa8365acccb82f8220248c9bfc2e9"}
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.765528 4662 generic.go:334] "Generic (PLEG): container finished" podID="ec279565-62e7-416e-9518-f4fb11ad219b" containerID="05535c9b3048a3a46b6fc0c83b08b9c1f57d7ed7a0f2b336445a82a4181a7a41" exitCode=0
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.765578 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-z9n8v" event={"ID":"ec279565-62e7-416e-9518-f4fb11ad219b","Type":"ContainerDied","Data":"05535c9b3048a3a46b6fc0c83b08b9c1f57d7ed7a0f2b336445a82a4181a7a41"}
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.767296 4662 generic.go:334] "Generic (PLEG): container finished" podID="6438122c-3da2-4f70-bf28-4c184f024504" containerID="ff796602639723d2bb13027221f63eeab1fbcd492d5ab690e5cfc66db60b6f26" exitCode=0
Dec 08 09:30:57 crc kubenswrapper[4662]: I1208 09:30:57.767932 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7655-account-create-update-vv749" event={"ID":"6438122c-3da2-4f70-bf28-4c184f024504","Type":"ContainerDied","Data":"ff796602639723d2bb13027221f63eeab1fbcd492d5ab690e5cfc66db60b6f26"}
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.587001 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-z9n8v"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.604650 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7655-account-create-update-vv749"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.610199 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ba3-account-create-update-8hzfj"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.617789 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-nwbcp"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.627459 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-39b5-account-create-update-296w8"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.663128 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pwrws"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.737526 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e3ec89-606e-422b-a9e8-48f37a44cc61-operator-scripts\") pod \"b4e3ec89-606e-422b-a9e8-48f37a44cc61\" (UID: \"b4e3ec89-606e-422b-a9e8-48f37a44cc61\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.737634 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-966rd\" (UniqueName: \"kubernetes.io/projected/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-kube-api-access-966rd\") pod \"1b9e44c6-473a-42d0-8c92-6542c91e8e1e\" (UID: \"1b9e44c6-473a-42d0-8c92-6542c91e8e1e\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.737807 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9crth\" (UniqueName: \"kubernetes.io/projected/378bb91e-6330-4052-94d8-01bf45db8010-kube-api-access-9crth\") pod \"378bb91e-6330-4052-94d8-01bf45db8010\" (UID: \"378bb91e-6330-4052-94d8-01bf45db8010\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.737851 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec279565-62e7-416e-9518-f4fb11ad219b-operator-scripts\") pod \"ec279565-62e7-416e-9518-f4fb11ad219b\" (UID: \"ec279565-62e7-416e-9518-f4fb11ad219b\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.737889 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bbdk7\" (UniqueName: \"kubernetes.io/projected/b4e3ec89-606e-422b-a9e8-48f37a44cc61-kube-api-access-bbdk7\") pod \"b4e3ec89-606e-422b-a9e8-48f37a44cc61\" (UID: \"b4e3ec89-606e-422b-a9e8-48f37a44cc61\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.737938 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6sl8\" (UniqueName: \"kubernetes.io/projected/6438122c-3da2-4f70-bf28-4c184f024504-kube-api-access-h6sl8\") pod \"6438122c-3da2-4f70-bf28-4c184f024504\" (UID: \"6438122c-3da2-4f70-bf28-4c184f024504\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.737964 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/378bb91e-6330-4052-94d8-01bf45db8010-operator-scripts\") pod \"378bb91e-6330-4052-94d8-01bf45db8010\" (UID: \"378bb91e-6330-4052-94d8-01bf45db8010\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.738106 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6438122c-3da2-4f70-bf28-4c184f024504-operator-scripts\") pod \"6438122c-3da2-4f70-bf28-4c184f024504\" (UID: \"6438122c-3da2-4f70-bf28-4c184f024504\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.738144 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-operator-scripts\") pod \"1b9e44c6-473a-42d0-8c92-6542c91e8e1e\" (UID: \"1b9e44c6-473a-42d0-8c92-6542c91e8e1e\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.738168 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96xb4\" (UniqueName: \"kubernetes.io/projected/ec279565-62e7-416e-9518-f4fb11ad219b-kube-api-access-96xb4\") pod \"ec279565-62e7-416e-9518-f4fb11ad219b\" (UID: \"ec279565-62e7-416e-9518-f4fb11ad219b\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.739591 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6438122c-3da2-4f70-bf28-4c184f024504-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6438122c-3da2-4f70-bf28-4c184f024504" (UID: "6438122c-3da2-4f70-bf28-4c184f024504"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.740010 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1b9e44c6-473a-42d0-8c92-6542c91e8e1e" (UID: "1b9e44c6-473a-42d0-8c92-6542c91e8e1e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.740775 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4e3ec89-606e-422b-a9e8-48f37a44cc61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4e3ec89-606e-422b-a9e8-48f37a44cc61" (UID: "b4e3ec89-606e-422b-a9e8-48f37a44cc61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.741832 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec279565-62e7-416e-9518-f4fb11ad219b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ec279565-62e7-416e-9518-f4fb11ad219b" (UID: "ec279565-62e7-416e-9518-f4fb11ad219b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.742331 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/378bb91e-6330-4052-94d8-01bf45db8010-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "378bb91e-6330-4052-94d8-01bf45db8010" (UID: "378bb91e-6330-4052-94d8-01bf45db8010"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.747532 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec279565-62e7-416e-9518-f4fb11ad219b-kube-api-access-96xb4" (OuterVolumeSpecName: "kube-api-access-96xb4") pod "ec279565-62e7-416e-9518-f4fb11ad219b" (UID: "ec279565-62e7-416e-9518-f4fb11ad219b"). InnerVolumeSpecName "kube-api-access-96xb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.750037 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6438122c-3da2-4f70-bf28-4c184f024504-kube-api-access-h6sl8" (OuterVolumeSpecName: "kube-api-access-h6sl8") pod "6438122c-3da2-4f70-bf28-4c184f024504" (UID: "6438122c-3da2-4f70-bf28-4c184f024504"). InnerVolumeSpecName "kube-api-access-h6sl8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.759774 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-kube-api-access-966rd" (OuterVolumeSpecName: "kube-api-access-966rd") pod "1b9e44c6-473a-42d0-8c92-6542c91e8e1e" (UID: "1b9e44c6-473a-42d0-8c92-6542c91e8e1e"). InnerVolumeSpecName "kube-api-access-966rd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.760072 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378bb91e-6330-4052-94d8-01bf45db8010-kube-api-access-9crth" (OuterVolumeSpecName: "kube-api-access-9crth") pod "378bb91e-6330-4052-94d8-01bf45db8010" (UID: "378bb91e-6330-4052-94d8-01bf45db8010"). InnerVolumeSpecName "kube-api-access-9crth". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.764219 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4e3ec89-606e-422b-a9e8-48f37a44cc61-kube-api-access-bbdk7" (OuterVolumeSpecName: "kube-api-access-bbdk7") pod "b4e3ec89-606e-422b-a9e8-48f37a44cc61" (UID: "b4e3ec89-606e-422b-a9e8-48f37a44cc61"). InnerVolumeSpecName "kube-api-access-bbdk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.818044 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nwbcp" event={"ID":"1b9e44c6-473a-42d0-8c92-6542c91e8e1e","Type":"ContainerDied","Data":"2501df015ff116f06fc46011aa30ca9a34f89cd7fca36cc02c916eccb89ca39f"}
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.818089 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2501df015ff116f06fc46011aa30ca9a34f89cd7fca36cc02c916eccb89ca39f"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.818063 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-nwbcp"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.819777 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-z9n8v" event={"ID":"ec279565-62e7-416e-9518-f4fb11ad219b","Type":"ContainerDied","Data":"62083cbcea278a9b031c73274de817c64491351e9d0153b0d199056dd075ad06"}
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.819808 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="62083cbcea278a9b031c73274de817c64491351e9d0153b0d199056dd075ad06"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.819863 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-z9n8v"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.821375 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-7655-account-create-update-vv749" event={"ID":"6438122c-3da2-4f70-bf28-4c184f024504","Type":"ContainerDied","Data":"6d64009b9a61d283afc45b4907f60760cd657093efed26a2e539ee736ae077b3"}
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.821413 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d64009b9a61d283afc45b4907f60760cd657093efed26a2e539ee736ae077b3"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.821462 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-7655-account-create-update-vv749"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.834031 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5ll52" event={"ID":"70d93045-d92f-482f-af8d-97f6f752703b","Type":"ContainerStarted","Data":"e8e6e80130922278f8705eed3367f4d58a1013041839492bb9cd1297585f2608"}
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.837263 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-39b5-account-create-update-296w8" event={"ID":"378bb91e-6330-4052-94d8-01bf45db8010","Type":"ContainerDied","Data":"224ce30b83af3f495aaab1b83fde555839ed2177d8c9594e9219dd924ae03f93"}
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.837303 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="224ce30b83af3f495aaab1b83fde555839ed2177d8c9594e9219dd924ae03f93"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.837374 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-39b5-account-create-update-296w8"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.838637 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-operator-scripts\") pod \"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037\" (UID: \"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.838850 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pbr9k\" (UniqueName: \"kubernetes.io/projected/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-kube-api-access-pbr9k\") pod \"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037\" (UID: \"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037\") "
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.839142 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4e3ec89-606e-422b-a9e8-48f37a44cc61-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.839153 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-966rd\" (UniqueName: \"kubernetes.io/projected/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-kube-api-access-966rd\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.839164 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9crth\" (UniqueName: \"kubernetes.io/projected/378bb91e-6330-4052-94d8-01bf45db8010-kube-api-access-9crth\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.839173 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ec279565-62e7-416e-9518-f4fb11ad219b-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.839181 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bbdk7\" (UniqueName: \"kubernetes.io/projected/b4e3ec89-606e-422b-a9e8-48f37a44cc61-kube-api-access-bbdk7\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.839189 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6sl8\" (UniqueName: \"kubernetes.io/projected/6438122c-3da2-4f70-bf28-4c184f024504-kube-api-access-h6sl8\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.839197 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/378bb91e-6330-4052-94d8-01bf45db8010-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.839205 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6438122c-3da2-4f70-bf28-4c184f024504-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.839213 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1b9e44c6-473a-42d0-8c92-6542c91e8e1e-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.839221 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96xb4\" (UniqueName: \"kubernetes.io/projected/ec279565-62e7-416e-9518-f4fb11ad219b-kube-api-access-96xb4\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.840484 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c5dbeb2a-e269-4ad0-8abd-dcf1547c6037" (UID: "c5dbeb2a-e269-4ad0-8abd-dcf1547c6037"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.841296 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ba3-account-create-update-8hzfj" event={"ID":"b4e3ec89-606e-422b-a9e8-48f37a44cc61","Type":"ContainerDied","Data":"c2c4aa8889010b40027f214e7a57f95898bba29b11b8588b7990c3bdb67e7c2f"}
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.841335 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2c4aa8889010b40027f214e7a57f95898bba29b11b8588b7990c3bdb67e7c2f"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.841405 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ba3-account-create-update-8hzfj"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.843414 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-kube-api-access-pbr9k" (OuterVolumeSpecName: "kube-api-access-pbr9k") pod "c5dbeb2a-e269-4ad0-8abd-dcf1547c6037" (UID: "c5dbeb2a-e269-4ad0-8abd-dcf1547c6037"). InnerVolumeSpecName "kube-api-access-pbr9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.847620 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-pwrws" event={"ID":"c5dbeb2a-e269-4ad0-8abd-dcf1547c6037","Type":"ContainerDied","Data":"5c173b91e7c51723c1260d1e71f9796b2d2ba6326221457687587dc505346205"}
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.847810 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c173b91e7c51723c1260d1e71f9796b2d2ba6326221457687587dc505346205"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.847674 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-pwrws"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.860015 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-5ll52" podStartSLOduration=3.230866975 podStartE2EDuration="9.859996107s" podCreationTimestamp="2025-12-08 09:30:53 +0000 UTC" firstStartedPulling="2025-12-08 09:30:55.789848033 +0000 UTC m=+979.358876023" lastFinishedPulling="2025-12-08 09:31:02.418977145 +0000 UTC m=+985.988005155" observedRunningTime="2025-12-08 09:31:02.854324094 +0000 UTC m=+986.423352084" watchObservedRunningTime="2025-12-08 09:31:02.859996107 +0000 UTC m=+986.429024097"
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.941110 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pbr9k\" (UniqueName: \"kubernetes.io/projected/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-kube-api-access-pbr9k\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:02 crc kubenswrapper[4662]: I1208 09:31:02.941143 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037-operator-scripts\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:06 crc kubenswrapper[4662]: I1208 09:31:06.878941 4662 generic.go:334] "Generic (PLEG): container finished" podID="70d93045-d92f-482f-af8d-97f6f752703b" containerID="e8e6e80130922278f8705eed3367f4d58a1013041839492bb9cd1297585f2608" exitCode=0
Dec 08 09:31:06 crc kubenswrapper[4662]: I1208 09:31:06.879027 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5ll52" event={"ID":"70d93045-d92f-482f-af8d-97f6f752703b","Type":"ContainerDied","Data":"e8e6e80130922278f8705eed3367f4d58a1013041839492bb9cd1297585f2608"}
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.211723 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5ll52"
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.327809 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99nzt\" (UniqueName: \"kubernetes.io/projected/70d93045-d92f-482f-af8d-97f6f752703b-kube-api-access-99nzt\") pod \"70d93045-d92f-482f-af8d-97f6f752703b\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") "
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.327906 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-config-data\") pod \"70d93045-d92f-482f-af8d-97f6f752703b\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") "
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.327986 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-combined-ca-bundle\") pod \"70d93045-d92f-482f-af8d-97f6f752703b\" (UID: \"70d93045-d92f-482f-af8d-97f6f752703b\") "
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.336142 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70d93045-d92f-482f-af8d-97f6f752703b-kube-api-access-99nzt" (OuterVolumeSpecName: "kube-api-access-99nzt") pod "70d93045-d92f-482f-af8d-97f6f752703b" (UID: "70d93045-d92f-482f-af8d-97f6f752703b"). InnerVolumeSpecName "kube-api-access-99nzt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.371586 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-config-data" (OuterVolumeSpecName: "config-data") pod "70d93045-d92f-482f-af8d-97f6f752703b" (UID: "70d93045-d92f-482f-af8d-97f6f752703b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.379125 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70d93045-d92f-482f-af8d-97f6f752703b" (UID: "70d93045-d92f-482f-af8d-97f6f752703b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.429871 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-config-data\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.429911 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70d93045-d92f-482f-af8d-97f6f752703b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.429924 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99nzt\" (UniqueName: \"kubernetes.io/projected/70d93045-d92f-482f-af8d-97f6f752703b-kube-api-access-99nzt\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.896673 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5ll52" event={"ID":"70d93045-d92f-482f-af8d-97f6f752703b","Type":"ContainerDied","Data":"5ff02ec908beab71439933ab7827384f6bd7f197e4f45db4eafbe406c0a5024c"}
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.896717 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ff02ec908beab71439933ab7827384f6bd7f197e4f45db4eafbe406c0a5024c"
Dec 08 09:31:08 crc kubenswrapper[4662]: I1208 09:31:08.896774 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-5ll52"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.104067 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-6ck72"]
Dec 08 09:31:09 crc kubenswrapper[4662]: E1208 09:31:09.104631 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12021a0c-10d4-4421-bf54-eb1dd2867aa8" containerName="ovn-config"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.104733 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="12021a0c-10d4-4421-bf54-eb1dd2867aa8" containerName="ovn-config"
Dec 08 09:31:09 crc kubenswrapper[4662]: E1208 09:31:09.104809 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec279565-62e7-416e-9518-f4fb11ad219b" containerName="mariadb-database-create"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.104862 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec279565-62e7-416e-9518-f4fb11ad219b" containerName="mariadb-database-create"
Dec 08 09:31:09 crc kubenswrapper[4662]: E1208 09:31:09.104930 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5dbeb2a-e269-4ad0-8abd-dcf1547c6037" containerName="mariadb-database-create"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.104994 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5dbeb2a-e269-4ad0-8abd-dcf1547c6037" containerName="mariadb-database-create"
Dec 08 09:31:09 crc kubenswrapper[4662]: E1208 09:31:09.105061 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b9e44c6-473a-42d0-8c92-6542c91e8e1e" containerName="mariadb-database-create"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.105112 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="1b9e44c6-473a-42d0-8c92-6542c91e8e1e" containerName="mariadb-database-create"
Dec 08 09:31:09 crc kubenswrapper[4662]: E1208 09:31:09.105179 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6438122c-3da2-4f70-bf28-4c184f024504" containerName="mariadb-account-create-update"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.105231 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="6438122c-3da2-4f70-bf28-4c184f024504" containerName="mariadb-account-create-update"
Dec 08 09:31:09 crc kubenswrapper[4662]: E1208 09:31:09.105306 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4e3ec89-606e-422b-a9e8-48f37a44cc61" containerName="mariadb-account-create-update"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.105363 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4e3ec89-606e-422b-a9e8-48f37a44cc61" containerName="mariadb-account-create-update"
Dec 08 09:31:09 crc kubenswrapper[4662]: E1208 09:31:09.105428 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70d93045-d92f-482f-af8d-97f6f752703b" containerName="keystone-db-sync"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.105483 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="70d93045-d92f-482f-af8d-97f6f752703b" containerName="keystone-db-sync"
Dec 08 09:31:09 crc kubenswrapper[4662]: E1208 09:31:09.105558 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="378bb91e-6330-4052-94d8-01bf45db8010" containerName="mariadb-account-create-update"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.105643 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="378bb91e-6330-4052-94d8-01bf45db8010" containerName="mariadb-account-create-update"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.105925 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec279565-62e7-416e-9518-f4fb11ad219b" containerName="mariadb-database-create"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.106046 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b9e44c6-473a-42d0-8c92-6542c91e8e1e" containerName="mariadb-database-create"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.106144 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="12021a0c-10d4-4421-bf54-eb1dd2867aa8" containerName="ovn-config"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.106221 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="378bb91e-6330-4052-94d8-01bf45db8010" containerName="mariadb-account-create-update"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.106286 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="70d93045-d92f-482f-af8d-97f6f752703b" containerName="keystone-db-sync"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.106359 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4e3ec89-606e-422b-a9e8-48f37a44cc61" containerName="mariadb-account-create-update"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.106448 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="6438122c-3da2-4f70-bf28-4c184f024504" containerName="mariadb-account-create-update"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.106543 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5dbeb2a-e269-4ad0-8abd-dcf1547c6037" containerName="mariadb-database-create"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.107691 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.133434 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-6ck72"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.173085 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-m9kkg"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.180240 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.187165 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.187361 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cm5ws"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.187376 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.187542 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.187656 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.198464 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m9kkg"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.241553 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.241626 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrnl9\" (UniqueName: \"kubernetes.io/projected/eb19e016-973d-4294-843d-2b24ba836b74-kube-api-access-wrnl9\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.241679 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.241700 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.241827 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-config\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.331856 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-cgx89"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.333157 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.340309 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.343865 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-credential-keys\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.343931 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.343967 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.344009 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-config-data\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.344028 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-scripts\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.344045 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-config\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.344064 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4k5k\" (UniqueName: \"kubernetes.io/projected/38b4ea33-84ea-4abf-9afd-3bb571219409-kube-api-access-f4k5k\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.344089 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-combined-ca-bundle\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.344145 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.344184 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wrnl9\" (UniqueName: \"kubernetes.io/projected/eb19e016-973d-4294-843d-2b24ba836b74-kube-api-access-wrnl9\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.344209 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-fernet-keys\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.345068 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-dns-svc\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.345401 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-config\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.345591 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-nb\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.346387 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-sb\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.348674 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.349008 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qrkj4"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.391290 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cgx89"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.400796 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wrnl9\" (UniqueName: \"kubernetes.io/projected/eb19e016-973d-4294-843d-2b24ba836b74-kube-api-access-wrnl9\") pod \"dnsmasq-dns-66fbd85b65-6ck72\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.418376 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-z4wmv"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.420043 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.423188 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.426116 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-6ck72"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.427255 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.427640 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7cjqq"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.446903 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-fernet-keys\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.446978 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-credential-keys\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.447012 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw57g\" (UniqueName: \"kubernetes.io/projected/34dda116-fa4c-44db-bfd6-0077e8060c33-kube-api-access-dw57g\") pod \"neutron-db-sync-cgx89\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.447071 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-config-data\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.447092 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-scripts\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.447119 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f4k5k\" (UniqueName: \"kubernetes.io/projected/38b4ea33-84ea-4abf-9afd-3bb571219409-kube-api-access-f4k5k\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.447147 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-combined-ca-bundle\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.447186 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-combined-ca-bundle\") pod \"neutron-db-sync-cgx89\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.447238 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-config\") pod \"neutron-db-sync-cgx89\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.451373 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-fernet-keys\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.458631 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-combined-ca-bundle\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.458813 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-scripts\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.471243 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-credential-keys\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.471485 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-config-data\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.499943 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-z4wmv"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.550042 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-config-data\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.550121 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-db-sync-config-data\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.550163 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw57g\" (UniqueName: \"kubernetes.io/projected/34dda116-fa4c-44db-bfd6-0077e8060c33-kube-api-access-dw57g\") pod \"neutron-db-sync-cgx89\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.550208 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-scripts\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.550267 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-combined-ca-bundle\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.550312 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-combined-ca-bundle\") pod \"neutron-db-sync-cgx89\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.550366 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-config\") pod \"neutron-db-sync-cgx89\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.550388 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn2dz\" (UniqueName: \"kubernetes.io/projected/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-kube-api-access-pn2dz\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.550426 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-etc-machine-id\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.558332 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4k5k\" (UniqueName: \"kubernetes.io/projected/38b4ea33-84ea-4abf-9afd-3bb571219409-kube-api-access-f4k5k\") pod \"keystone-bootstrap-m9kkg\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.594108 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw57g\" (UniqueName: \"kubernetes.io/projected/34dda116-fa4c-44db-bfd6-0077e8060c33-kube-api-access-dw57g\") pod \"neutron-db-sync-cgx89\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.594589 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-config\") pod \"neutron-db-sync-cgx89\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.595309 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-combined-ca-bundle\") pod \"neutron-db-sync-cgx89\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.620860 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.623216 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.649003 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.651843 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-combined-ca-bundle\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.651941 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pn2dz\" (UniqueName: \"kubernetes.io/projected/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-kube-api-access-pn2dz\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.651974 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-etc-machine-id\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.651999 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-config-data\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.652041 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-db-sync-config-data\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.652078 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-scripts\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.653812 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.654292 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-etc-machine-id\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.654447 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.655791 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-scripts\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.658341 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-combined-ca-bundle\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.659482 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-config-data\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.662109 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cgx89"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.662541 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-db-sync-config-data\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.675205 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-4v9mq"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.676285 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4v9mq"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.678568 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pn2dz\" (UniqueName: \"kubernetes.io/projected/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-kube-api-access-pn2dz\") pod \"cinder-db-sync-z4wmv\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " pod="openstack/cinder-db-sync-z4wmv"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.682122 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vssmw"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.682469 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.685943 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.716170 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-6ck72"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.745479 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4v9mq"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.757591 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-log-httpd\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.757645 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v9c4\" (UniqueName: \"kubernetes.io/projected/67a7de60-620b-4857-839c-4587bf1cab11-kube-api-access-7v9c4\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.757662 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-run-httpd\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.757701 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-config-data\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.757720 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.757759 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-scripts\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.757795 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.806551 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-qcgpk"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.807111 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m9kkg"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.808257 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.865868 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-qcgpk"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.865930 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-xd547"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.909820 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xd547"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.914277 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-xd547"]
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.914444 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-47t8j"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932051 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-run-httpd\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932199 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-config-data\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932239 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-config-data\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932269 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0"
Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932322 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2234708-fc8c-42ec-91f3-8dfaffca750e-logs\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq"
Dec 08 09:31:09 crc kubenswrapper[4662]:
I1208 09:31:09.932361 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-scripts\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932470 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932678 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-scripts\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932712 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-config\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932777 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnzl6\" (UniqueName: \"kubernetes.io/projected/b11e546d-17fd-4aae-a904-879bf7264818-kube-api-access-pnzl6\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932854 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brkpx\" (UniqueName: \"kubernetes.io/projected/d2234708-fc8c-42ec-91f3-8dfaffca750e-kube-api-access-brkpx\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932885 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-combined-ca-bundle\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932937 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.932968 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.933034 4662 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-log-httpd\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.933072 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.933121 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v9c4\" (UniqueName: \"kubernetes.io/projected/67a7de60-620b-4857-839c-4587bf1cab11-kube-api-access-7v9c4\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.933676 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.937158 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-run-httpd\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.945688 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-log-httpd\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.955267 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-z4wmv" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.958194 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.968195 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.968229 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-config-data\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.973109 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v9c4\" (UniqueName: \"kubernetes.io/projected/67a7de60-620b-4857-839c-4587bf1cab11-kube-api-access-7v9c4\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.978669 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-scripts\") pod \"ceilometer-0\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " pod="openstack/ceilometer-0" Dec 08 09:31:09 crc kubenswrapper[4662]: I1208 09:31:09.980724 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.034963 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-scripts\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035007 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-config\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035028 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnzl6\" (UniqueName: \"kubernetes.io/projected/b11e546d-17fd-4aae-a904-879bf7264818-kube-api-access-pnzl6\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035062 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-combined-ca-bundle\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035084 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brkpx\" (UniqueName: \"kubernetes.io/projected/d2234708-fc8c-42ec-91f3-8dfaffca750e-kube-api-access-brkpx\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035099 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035118 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035143 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6m2g\" (UniqueName: \"kubernetes.io/projected/a3cedcd2-d44c-4c45-acc9-384d45424740-kube-api-access-f6m2g\") pod \"barbican-db-sync-xd547\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " pod="openstack/barbican-db-sync-xd547" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035165 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc 
kubenswrapper[4662]: I1208 09:31:10.035190 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-combined-ca-bundle\") pod \"barbican-db-sync-xd547\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " pod="openstack/barbican-db-sync-xd547" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035214 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-db-sync-config-data\") pod \"barbican-db-sync-xd547\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " pod="openstack/barbican-db-sync-xd547" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035245 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-config-data\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035269 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2234708-fc8c-42ec-91f3-8dfaffca750e-logs\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.035694 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2234708-fc8c-42ec-91f3-8dfaffca750e-logs\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.038827 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-sb\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.039362 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-config\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.047196 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-nb\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.047858 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-dns-svc\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.056463 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-config-data\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.056734 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-scripts\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.076301 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-combined-ca-bundle\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.090767 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brkpx\" (UniqueName: \"kubernetes.io/projected/d2234708-fc8c-42ec-91f3-8dfaffca750e-kube-api-access-brkpx\") pod \"placement-db-sync-4v9mq\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.091120 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnzl6\" (UniqueName: \"kubernetes.io/projected/b11e546d-17fd-4aae-a904-879bf7264818-kube-api-access-pnzl6\") pod \"dnsmasq-dns-6bf59f66bf-qcgpk\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.136551 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6m2g\" (UniqueName: \"kubernetes.io/projected/a3cedcd2-d44c-4c45-acc9-384d45424740-kube-api-access-f6m2g\") pod \"barbican-db-sync-xd547\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " pod="openstack/barbican-db-sync-xd547" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.136815 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-combined-ca-bundle\") pod \"barbican-db-sync-xd547\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " pod="openstack/barbican-db-sync-xd547" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.136905 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-db-sync-config-data\") pod \"barbican-db-sync-xd547\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " pod="openstack/barbican-db-sync-xd547" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.142304 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-db-sync-config-data\") pod \"barbican-db-sync-xd547\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " pod="openstack/barbican-db-sync-xd547" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.149487 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-combined-ca-bundle\") pod \"barbican-db-sync-xd547\" (UID: 
\"a3cedcd2-d44c-4c45-acc9-384d45424740\") " pod="openstack/barbican-db-sync-xd547" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.177379 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6m2g\" (UniqueName: \"kubernetes.io/projected/a3cedcd2-d44c-4c45-acc9-384d45424740-kube-api-access-f6m2g\") pod \"barbican-db-sync-xd547\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " pod="openstack/barbican-db-sync-xd547" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.200154 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.303074 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xd547" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.315646 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.451028 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-6ck72"] Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.672473 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-cgx89"] Dec 08 09:31:10 crc kubenswrapper[4662]: I1208 09:31:10.817527 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-m9kkg"] Dec 08 09:31:11 crc kubenswrapper[4662]: I1208 09:31:11.014708 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m9kkg" event={"ID":"38b4ea33-84ea-4abf-9afd-3bb571219409","Type":"ContainerStarted","Data":"23f5d861018f1261de6b81941b34ccac4cdc8b430b13a38aba5bbf004266dab9"} Dec 08 09:31:11 crc kubenswrapper[4662]: I1208 09:31:11.017222 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cgx89" event={"ID":"34dda116-fa4c-44db-bfd6-0077e8060c33","Type":"ContainerStarted","Data":"d10707659fa77e53397fbd2d762d3464543050943c5f8bbf5a392d410bc46715"} Dec 08 09:31:11 crc kubenswrapper[4662]: I1208 09:31:11.018332 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-6ck72" event={"ID":"eb19e016-973d-4294-843d-2b24ba836b74","Type":"ContainerStarted","Data":"c82cb246bcfc942939d659802e33557b61b30e37bdfea55a0595d9eec0cde290"} Dec 08 09:31:11 crc kubenswrapper[4662]: I1208 09:31:11.079152 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:31:11 crc kubenswrapper[4662]: I1208 09:31:11.157489 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-z4wmv"] Dec 08 09:31:11 crc kubenswrapper[4662]: W1208 09:31:11.165209 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74a3b1e6_10ba_4951_9aa3_ec2173c7e0dc.slice/crio-4756f582ca2664d59b65666b90aabd23f666802fcee1bcdb69a56eaf9c0f7e72 WatchSource:0}: Error finding container 4756f582ca2664d59b65666b90aabd23f666802fcee1bcdb69a56eaf9c0f7e72: Status 404 returned error can't find the container with id 4756f582ca2664d59b65666b90aabd23f666802fcee1bcdb69a56eaf9c0f7e72 Dec 08 09:31:11 crc kubenswrapper[4662]: I1208 09:31:11.195029 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-qcgpk"] Dec 08 09:31:11 crc kubenswrapper[4662]: I1208 09:31:11.447985 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-db-sync-xd547"] Dec 08 09:31:11 crc kubenswrapper[4662]: I1208 09:31:11.479422 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-4v9mq"] Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.042567 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a7de60-620b-4857-839c-4587bf1cab11","Type":"ContainerStarted","Data":"d4446418dceb37c9d390b251e5fc1e1c0b5bb1abfd298fbcec3a726d9172d8fa"} Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.059646 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cgx89" event={"ID":"34dda116-fa4c-44db-bfd6-0077e8060c33","Type":"ContainerStarted","Data":"964b26ff1d4b27498f7b3742cb7933e2e681e00a76ee69de3c28f2c03b70417c"} Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.070355 4662 generic.go:334] "Generic (PLEG): container finished" podID="eb19e016-973d-4294-843d-2b24ba836b74" containerID="ab34acb29a855f27121faee9f328a7ffd448c88027fb33baa45224b76437b0f9" exitCode=0 Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.070462 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-6ck72" event={"ID":"eb19e016-973d-4294-843d-2b24ba836b74","Type":"ContainerDied","Data":"ab34acb29a855f27121faee9f328a7ffd448c88027fb33baa45224b76437b0f9"} Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.073588 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-z4wmv" event={"ID":"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc","Type":"ContainerStarted","Data":"4756f582ca2664d59b65666b90aabd23f666802fcee1bcdb69a56eaf9c0f7e72"} Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.077552 4662 generic.go:334] "Generic (PLEG): container finished" podID="b11e546d-17fd-4aae-a904-879bf7264818" containerID="17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133" exitCode=0 Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.077613 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" event={"ID":"b11e546d-17fd-4aae-a904-879bf7264818","Type":"ContainerDied","Data":"17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133"} Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.077637 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" event={"ID":"b11e546d-17fd-4aae-a904-879bf7264818","Type":"ContainerStarted","Data":"6f40b02cc611bfbd5cc86fa15950236fb65a837401e3934fef86186a70aef775"} Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.084015 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-cgx89" podStartSLOduration=3.08399517 podStartE2EDuration="3.08399517s" podCreationTimestamp="2025-12-08 09:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:31:12.08251053 +0000 UTC m=+995.651538540" watchObservedRunningTime="2025-12-08 09:31:12.08399517 +0000 UTC m=+995.653023160" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.091333 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xd547" event={"ID":"a3cedcd2-d44c-4c45-acc9-384d45424740","Type":"ContainerStarted","Data":"edd45a678288b4a3c43f1da8ccd0cf7785ba5700529d0d362f838c27c74b00da"} Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.113447 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-m9kkg" event={"ID":"38b4ea33-84ea-4abf-9afd-3bb571219409","Type":"ContainerStarted","Data":"02c7a2b96844cfeda772d85971739253ca38640415beb5e67820bde3741666b3"} Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.115443 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4v9mq" event={"ID":"d2234708-fc8c-42ec-91f3-8dfaffca750e","Type":"ContainerStarted","Data":"6a9596dae62283eaf1cbd95781fbc3bb7b9af01997c475ac2ef9323e6f64a1a5"} Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.221842 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-m9kkg" podStartSLOduration=3.221818536 podStartE2EDuration="3.221818536s" podCreationTimestamp="2025-12-08 09:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:31:12.203425569 +0000 UTC m=+995.772453559" watchObservedRunningTime="2025-12-08 09:31:12.221818536 +0000 UTC m=+995.790846526" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.561977 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-6ck72" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.721473 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-dns-svc\") pod \"eb19e016-973d-4294-843d-2b24ba836b74\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.721675 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-sb\") pod \"eb19e016-973d-4294-843d-2b24ba836b74\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.721701 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-nb\") pod \"eb19e016-973d-4294-843d-2b24ba836b74\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.721727 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrnl9\" (UniqueName: \"kubernetes.io/projected/eb19e016-973d-4294-843d-2b24ba836b74-kube-api-access-wrnl9\") pod \"eb19e016-973d-4294-843d-2b24ba836b74\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.721790 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-config\") pod \"eb19e016-973d-4294-843d-2b24ba836b74\" (UID: \"eb19e016-973d-4294-843d-2b24ba836b74\") " Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.759214 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eb19e016-973d-4294-843d-2b24ba836b74" (UID: "eb19e016-973d-4294-843d-2b24ba836b74"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.764003 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb19e016-973d-4294-843d-2b24ba836b74-kube-api-access-wrnl9" (OuterVolumeSpecName: "kube-api-access-wrnl9") pod "eb19e016-973d-4294-843d-2b24ba836b74" (UID: "eb19e016-973d-4294-843d-2b24ba836b74"). InnerVolumeSpecName "kube-api-access-wrnl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.764632 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-config" (OuterVolumeSpecName: "config") pod "eb19e016-973d-4294-843d-2b24ba836b74" (UID: "eb19e016-973d-4294-843d-2b24ba836b74"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.777125 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eb19e016-973d-4294-843d-2b24ba836b74" (UID: "eb19e016-973d-4294-843d-2b24ba836b74"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.777245 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eb19e016-973d-4294-843d-2b24ba836b74" (UID: "eb19e016-973d-4294-843d-2b24ba836b74"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.824109 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.824140 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.824151 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrnl9\" (UniqueName: \"kubernetes.io/projected/eb19e016-973d-4294-843d-2b24ba836b74-kube-api-access-wrnl9\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.824162 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:12 crc kubenswrapper[4662]: I1208 09:31:12.824171 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eb19e016-973d-4294-843d-2b24ba836b74-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:13 crc kubenswrapper[4662]: I1208 09:31:13.126348 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66fbd85b65-6ck72" event={"ID":"eb19e016-973d-4294-843d-2b24ba836b74","Type":"ContainerDied","Data":"c82cb246bcfc942939d659802e33557b61b30e37bdfea55a0595d9eec0cde290"} Dec 08 09:31:13 crc kubenswrapper[4662]: I1208 09:31:13.126587 4662 scope.go:117] "RemoveContainer" 
containerID="ab34acb29a855f27121faee9f328a7ffd448c88027fb33baa45224b76437b0f9" Dec 08 09:31:13 crc kubenswrapper[4662]: I1208 09:31:13.126696 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66fbd85b65-6ck72" Dec 08 09:31:13 crc kubenswrapper[4662]: I1208 09:31:13.157541 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" event={"ID":"b11e546d-17fd-4aae-a904-879bf7264818","Type":"ContainerStarted","Data":"548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838"} Dec 08 09:31:13 crc kubenswrapper[4662]: I1208 09:31:13.257557 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-6ck72"] Dec 08 09:31:13 crc kubenswrapper[4662]: I1208 09:31:13.282814 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-66fbd85b65-6ck72"] Dec 08 09:31:13 crc kubenswrapper[4662]: I1208 09:31:13.285309 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" podStartSLOduration=4.285294186 podStartE2EDuration="4.285294186s" podCreationTimestamp="2025-12-08 09:31:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:31:13.267217518 +0000 UTC m=+996.836245508" watchObservedRunningTime="2025-12-08 09:31:13.285294186 +0000 UTC m=+996.854322176" Dec 08 09:31:13 crc kubenswrapper[4662]: I1208 09:31:13.706268 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:31:14 crc kubenswrapper[4662]: I1208 09:31:14.189122 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:14 crc kubenswrapper[4662]: I1208 09:31:14.721446 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb19e016-973d-4294-843d-2b24ba836b74" path="/var/lib/kubelet/pods/eb19e016-973d-4294-843d-2b24ba836b74/volumes" Dec 08 09:31:15 crc kubenswrapper[4662]: I1208 09:31:15.208432 4662 generic.go:334] "Generic (PLEG): container finished" podID="2e4318c6-7d18-41a3-92aa-6cbc4c99b79e" containerID="0fdeca8043e91d3ea9633100c4c128efca4e9a1a499252a06dd77f8ad170b14e" exitCode=0 Dec 08 09:31:15 crc kubenswrapper[4662]: I1208 09:31:15.208523 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bkngw" event={"ID":"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e","Type":"ContainerDied","Data":"0fdeca8043e91d3ea9633100c4c128efca4e9a1a499252a06dd77f8ad170b14e"} Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.250118 4662 generic.go:334] "Generic (PLEG): container finished" podID="38b4ea33-84ea-4abf-9afd-3bb571219409" containerID="02c7a2b96844cfeda772d85971739253ca38640415beb5e67820bde3741666b3" exitCode=0 Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.250294 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m9kkg" event={"ID":"38b4ea33-84ea-4abf-9afd-3bb571219409","Type":"ContainerDied","Data":"02c7a2b96844cfeda772d85971739253ca38640415beb5e67820bde3741666b3"} Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.456086 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bkngw" Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.602307 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-config-data\") pod \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.602373 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf2sb\" (UniqueName: \"kubernetes.io/projected/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-kube-api-access-pf2sb\") pod \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.602436 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-db-sync-config-data\") pod \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.602499 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-combined-ca-bundle\") pod \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\" (UID: \"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e\") " Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.610460 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2e4318c6-7d18-41a3-92aa-6cbc4c99b79e" (UID: "2e4318c6-7d18-41a3-92aa-6cbc4c99b79e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.611211 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-kube-api-access-pf2sb" (OuterVolumeSpecName: "kube-api-access-pf2sb") pod "2e4318c6-7d18-41a3-92aa-6cbc4c99b79e" (UID: "2e4318c6-7d18-41a3-92aa-6cbc4c99b79e"). InnerVolumeSpecName "kube-api-access-pf2sb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.632171 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e4318c6-7d18-41a3-92aa-6cbc4c99b79e" (UID: "2e4318c6-7d18-41a3-92aa-6cbc4c99b79e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.663260 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-config-data" (OuterVolumeSpecName: "config-data") pod "2e4318c6-7d18-41a3-92aa-6cbc4c99b79e" (UID: "2e4318c6-7d18-41a3-92aa-6cbc4c99b79e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.704559 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.704589 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf2sb\" (UniqueName: \"kubernetes.io/projected/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-kube-api-access-pf2sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.704598 4662 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:19 crc kubenswrapper[4662]: I1208 09:31:19.704606 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:20 crc kubenswrapper[4662]: I1208 09:31:20.202387 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:31:20 crc kubenswrapper[4662]: I1208 09:31:20.262022 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zrgj8"] Dec 08 09:31:20 crc kubenswrapper[4662]: I1208 09:31:20.262356 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-zrgj8" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerName="dnsmasq-dns" containerID="cri-o://28ecba70f752a5a6e7b3f64fc9bdc43570a089977affc86fb0627755603f1fdb" gracePeriod=10 Dec 08 09:31:20 crc kubenswrapper[4662]: I1208 09:31:20.308624 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bkngw" event={"ID":"2e4318c6-7d18-41a3-92aa-6cbc4c99b79e","Type":"ContainerDied","Data":"8a9531c09dad394e5d8f4ffa6726377098e60b8c30ed6647fadf40ec4918dc2b"} Dec 08 09:31:20 crc kubenswrapper[4662]: I1208 09:31:20.309093 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a9531c09dad394e5d8f4ffa6726377098e60b8c30ed6647fadf40ec4918dc2b" Dec 08 09:31:20 crc kubenswrapper[4662]: I1208 09:31:20.308687 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-bkngw" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.153245 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-glvjm"] Dec 08 09:31:21 crc kubenswrapper[4662]: E1208 09:31:21.158263 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb19e016-973d-4294-843d-2b24ba836b74" containerName="init" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.158297 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb19e016-973d-4294-843d-2b24ba836b74" containerName="init" Dec 08 09:31:21 crc kubenswrapper[4662]: E1208 09:31:21.158324 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e4318c6-7d18-41a3-92aa-6cbc4c99b79e" containerName="glance-db-sync" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.158331 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e4318c6-7d18-41a3-92aa-6cbc4c99b79e" containerName="glance-db-sync" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.158579 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e4318c6-7d18-41a3-92aa-6cbc4c99b79e" containerName="glance-db-sync" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.158594 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb19e016-973d-4294-843d-2b24ba836b74" containerName="init" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.159432 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.187813 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-glvjm"] Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.240452 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.240485 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-config\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.240523 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.240575 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2plg\" (UniqueName: \"kubernetes.io/projected/7c76993f-f2fc-49d8-9604-f26007c1091b-kube-api-access-h2plg\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.240603 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.347239 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.347281 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-config\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.347317 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.347367 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2plg\" (UniqueName: \"kubernetes.io/projected/7c76993f-f2fc-49d8-9604-f26007c1091b-kube-api-access-h2plg\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.347392 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.348904 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-config\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.348935 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-sb\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.349479 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-nb\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.349502 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-dns-svc\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: 
\"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.358285 4662 generic.go:334] "Generic (PLEG): container finished" podID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerID="28ecba70f752a5a6e7b3f64fc9bdc43570a089977affc86fb0627755603f1fdb" exitCode=0 Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.358320 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zrgj8" event={"ID":"bd6ad89b-50ff-42c9-91a1-36f23be3568b","Type":"ContainerDied","Data":"28ecba70f752a5a6e7b3f64fc9bdc43570a089977affc86fb0627755603f1fdb"} Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.369029 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2plg\" (UniqueName: \"kubernetes.io/projected/7c76993f-f2fc-49d8-9604-f26007c1091b-kube-api-access-h2plg\") pod \"dnsmasq-dns-5b6dbdb6f5-glvjm\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:21 crc kubenswrapper[4662]: I1208 09:31:21.514141 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:24 crc kubenswrapper[4662]: I1208 09:31:24.186275 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-zrgj8" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Dec 08 09:31:27 crc kubenswrapper[4662]: E1208 09:31:27.128480 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Dec 08 09:31:27 crc kubenswrapper[4662]: E1208 09:31:27.129106 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n64h55h5c7hbdhf5hc6hc4h69h94h5c8h5d4h64h546h698h56bh9h69h8dh67bhf9h59ch5c8hbdh556hdhd4h666hfh77h556h64fh5d6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7v9c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(67a7de60-620b-4857-839c-4587bf1cab11): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:31:34 crc kubenswrapper[4662]: I1208 09:31:34.186162 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-zrgj8" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Dec 08 09:31:39 crc kubenswrapper[4662]: I1208 09:31:39.187722 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-zrgj8" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Dec 08 09:31:39 crc kubenswrapper[4662]: I1208 09:31:39.188735 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.309298 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-m9kkg" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.369011 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-credential-keys\") pod \"38b4ea33-84ea-4abf-9afd-3bb571219409\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.369109 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4k5k\" (UniqueName: \"kubernetes.io/projected/38b4ea33-84ea-4abf-9afd-3bb571219409-kube-api-access-f4k5k\") pod \"38b4ea33-84ea-4abf-9afd-3bb571219409\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.369210 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-scripts\") pod \"38b4ea33-84ea-4abf-9afd-3bb571219409\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.369243 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-combined-ca-bundle\") pod \"38b4ea33-84ea-4abf-9afd-3bb571219409\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.369276 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-fernet-keys\") pod \"38b4ea33-84ea-4abf-9afd-3bb571219409\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.369369 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-config-data\") pod \"38b4ea33-84ea-4abf-9afd-3bb571219409\" (UID: \"38b4ea33-84ea-4abf-9afd-3bb571219409\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.375245 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "38b4ea33-84ea-4abf-9afd-3bb571219409" (UID: "38b4ea33-84ea-4abf-9afd-3bb571219409"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.376939 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-scripts" (OuterVolumeSpecName: "scripts") pod "38b4ea33-84ea-4abf-9afd-3bb571219409" (UID: "38b4ea33-84ea-4abf-9afd-3bb571219409"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.377009 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38b4ea33-84ea-4abf-9afd-3bb571219409-kube-api-access-f4k5k" (OuterVolumeSpecName: "kube-api-access-f4k5k") pod "38b4ea33-84ea-4abf-9afd-3bb571219409" (UID: "38b4ea33-84ea-4abf-9afd-3bb571219409"). InnerVolumeSpecName "kube-api-access-f4k5k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.377327 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "38b4ea33-84ea-4abf-9afd-3bb571219409" (UID: "38b4ea33-84ea-4abf-9afd-3bb571219409"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.401553 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-config-data" (OuterVolumeSpecName: "config-data") pod "38b4ea33-84ea-4abf-9afd-3bb571219409" (UID: "38b4ea33-84ea-4abf-9afd-3bb571219409"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.412791 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "38b4ea33-84ea-4abf-9afd-3bb571219409" (UID: "38b4ea33-84ea-4abf-9afd-3bb571219409"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.471093 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.471125 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.471136 4662 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.471144 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.471152 4662 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/38b4ea33-84ea-4abf-9afd-3bb571219409-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.471161 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f4k5k\" (UniqueName: \"kubernetes.io/projected/38b4ea33-84ea-4abf-9afd-3bb571219409-kube-api-access-f4k5k\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.538987 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-m9kkg" event={"ID":"38b4ea33-84ea-4abf-9afd-3bb571219409","Type":"ContainerDied","Data":"23f5d861018f1261de6b81941b34ccac4cdc8b430b13a38aba5bbf004266dab9"} Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.539029 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23f5d861018f1261de6b81941b34ccac4cdc8b430b13a38aba5bbf004266dab9" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.539093 4662 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-m9kkg" Dec 08 09:31:41 crc kubenswrapper[4662]: E1208 09:31:41.873458 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Dec 08 09:31:41 crc kubenswrapper[4662]: E1208 09:31:41.873582 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f6m2g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-xd547_openstack(a3cedcd2-d44c-4c45-acc9-384d45424740): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:31:41 crc kubenswrapper[4662]: E1208 09:31:41.876026 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-xd547" podUID="a3cedcd2-d44c-4c45-acc9-384d45424740" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.881932 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.979037 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxpkp\" (UniqueName: \"kubernetes.io/projected/bd6ad89b-50ff-42c9-91a1-36f23be3568b-kube-api-access-kxpkp\") pod \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.979372 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-nb\") pod \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.979441 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-sb\") pod \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.979459 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-dns-svc\") pod \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.979495 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-config\") pod \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\" (UID: \"bd6ad89b-50ff-42c9-91a1-36f23be3568b\") " Dec 08 09:31:41 crc kubenswrapper[4662]: I1208 09:31:41.984148 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd6ad89b-50ff-42c9-91a1-36f23be3568b-kube-api-access-kxpkp" (OuterVolumeSpecName: "kube-api-access-kxpkp") pod "bd6ad89b-50ff-42c9-91a1-36f23be3568b" (UID: "bd6ad89b-50ff-42c9-91a1-36f23be3568b"). InnerVolumeSpecName "kube-api-access-kxpkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.016648 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "bd6ad89b-50ff-42c9-91a1-36f23be3568b" (UID: "bd6ad89b-50ff-42c9-91a1-36f23be3568b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.017010 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-config" (OuterVolumeSpecName: "config") pod "bd6ad89b-50ff-42c9-91a1-36f23be3568b" (UID: "bd6ad89b-50ff-42c9-91a1-36f23be3568b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.038148 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "bd6ad89b-50ff-42c9-91a1-36f23be3568b" (UID: "bd6ad89b-50ff-42c9-91a1-36f23be3568b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.058082 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "bd6ad89b-50ff-42c9-91a1-36f23be3568b" (UID: "bd6ad89b-50ff-42c9-91a1-36f23be3568b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.082157 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.082188 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxpkp\" (UniqueName: \"kubernetes.io/projected/bd6ad89b-50ff-42c9-91a1-36f23be3568b-kube-api-access-kxpkp\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.082199 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.082224 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.082490 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/bd6ad89b-50ff-42c9-91a1-36f23be3568b-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.410908 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-m9kkg"] Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.428769 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-m9kkg"] Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.536928 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-r6jqp"] Dec 08 09:31:42 crc kubenswrapper[4662]: E1208 09:31:42.538147 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38b4ea33-84ea-4abf-9afd-3bb571219409" containerName="keystone-bootstrap" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.538189 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="38b4ea33-84ea-4abf-9afd-3bb571219409" containerName="keystone-bootstrap" Dec 08 09:31:42 crc kubenswrapper[4662]: E1208 09:31:42.538220 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerName="dnsmasq-dns" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.538227 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerName="dnsmasq-dns" Dec 08 09:31:42 crc kubenswrapper[4662]: E1208 09:31:42.538246 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerName="init" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.538270 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerName="init" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.538760 4662 
memory_manager.go:354] "RemoveStaleState removing state" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerName="dnsmasq-dns" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.538797 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="38b4ea33-84ea-4abf-9afd-3bb571219409" containerName="keystone-bootstrap" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.541170 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r6jqp"] Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.541251 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.561427 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.561600 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.561610 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.562931 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.563128 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cm5ws" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.574959 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-zrgj8" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.576419 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-zrgj8" event={"ID":"bd6ad89b-50ff-42c9-91a1-36f23be3568b","Type":"ContainerDied","Data":"fd1890c858a8a04b64038f9512a3065a7a99dca248ef8a72527f13323c9b5b21"} Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.576482 4662 scope.go:117] "RemoveContainer" containerID="28ecba70f752a5a6e7b3f64fc9bdc43570a089977affc86fb0627755603f1fdb" Dec 08 09:31:42 crc kubenswrapper[4662]: E1208 09:31:42.584209 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-xd547" podUID="a3cedcd2-d44c-4c45-acc9-384d45424740" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.593369 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-credential-keys\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.593451 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-scripts\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.593467 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-combined-ca-bundle\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.593503 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-fernet-keys\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.593541 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-config-data\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.593597 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxhz5\" (UniqueName: \"kubernetes.io/projected/46681e20-4717-4812-9c6e-b98bd8630c4c-kube-api-access-jxhz5\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.620642 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zrgj8"] Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.624797 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-zrgj8"] Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.695074 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-fernet-keys\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.695150 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-config-data\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.695220 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxhz5\" (UniqueName: \"kubernetes.io/projected/46681e20-4717-4812-9c6e-b98bd8630c4c-kube-api-access-jxhz5\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.695252 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-credential-keys\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.695299 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-scripts\") pod \"keystone-bootstrap-r6jqp\" (UID: 
\"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.695314 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-combined-ca-bundle\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.700310 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-scripts\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.701400 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-combined-ca-bundle\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.703296 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-fernet-keys\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.712678 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-config-data\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.714334 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxhz5\" (UniqueName: \"kubernetes.io/projected/46681e20-4717-4812-9c6e-b98bd8630c4c-kube-api-access-jxhz5\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.716339 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38b4ea33-84ea-4abf-9afd-3bb571219409" path="/var/lib/kubelet/pods/38b4ea33-84ea-4abf-9afd-3bb571219409/volumes" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.716973 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" path="/var/lib/kubelet/pods/bd6ad89b-50ff-42c9-91a1-36f23be3568b/volumes" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.720284 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-credential-keys\") pod \"keystone-bootstrap-r6jqp\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:42 crc kubenswrapper[4662]: I1208 09:31:42.883275 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:43 crc kubenswrapper[4662]: E1208 09:31:43.479532 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Dec 08 09:31:43 crc kubenswrapper[4662]: E1208 09:31:43.479733 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pn2dz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-z4wmv_openstack(74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:31:43 crc kubenswrapper[4662]: E1208 09:31:43.480976 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-z4wmv" podUID="74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" Dec 08 09:31:43 crc kubenswrapper[4662]: I1208 09:31:43.494581 4662 scope.go:117] "RemoveContainer" 
containerID="6bf4a74d45bccd42cd0815a91949d7e6da93663d9b9185b596b6666ae81c2cdd" Dec 08 09:31:43 crc kubenswrapper[4662]: E1208 09:31:43.593283 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-z4wmv" podUID="74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" Dec 08 09:31:43 crc kubenswrapper[4662]: I1208 09:31:43.899464 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-glvjm"] Dec 08 09:31:43 crc kubenswrapper[4662]: W1208 09:31:43.954217 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c76993f_f2fc_49d8_9604_f26007c1091b.slice/crio-45a291a6d10d107a406bdea18241ddf562636feef57581afaac064ccc764d51b WatchSource:0}: Error finding container 45a291a6d10d107a406bdea18241ddf562636feef57581afaac064ccc764d51b: Status 404 returned error can't find the container with id 45a291a6d10d107a406bdea18241ddf562636feef57581afaac064ccc764d51b Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.189052 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-zrgj8" podUID="bd6ad89b-50ff-42c9-91a1-36f23be3568b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: i/o timeout" Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.280085 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-r6jqp"] Dec 08 09:31:44 crc kubenswrapper[4662]: W1208 09:31:44.285658 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46681e20_4717_4812_9c6e_b98bd8630c4c.slice/crio-0c10008aeb168c1948b26e879253b0680b2d5a78fb5dba2c0fb15d031e36bd9e WatchSource:0}: Error finding container 0c10008aeb168c1948b26e879253b0680b2d5a78fb5dba2c0fb15d031e36bd9e: Status 404 returned error can't find the container with id 0c10008aeb168c1948b26e879253b0680b2d5a78fb5dba2c0fb15d031e36bd9e Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.601463 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6jqp" event={"ID":"46681e20-4717-4812-9c6e-b98bd8630c4c","Type":"ContainerStarted","Data":"8b8b6233108b9055a6df0a863c65dfafcb8b258b17df895e5a73a780d1b4d15f"} Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.601503 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6jqp" event={"ID":"46681e20-4717-4812-9c6e-b98bd8630c4c","Type":"ContainerStarted","Data":"0c10008aeb168c1948b26e879253b0680b2d5a78fb5dba2c0fb15d031e36bd9e"} Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.607364 4662 generic.go:334] "Generic (PLEG): container finished" podID="7c76993f-f2fc-49d8-9604-f26007c1091b" containerID="e165f271744ecf35396adba64a71cb3a92cfe3744e4a4a20b598d5e1bdf948c2" exitCode=0 Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.607469 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" event={"ID":"7c76993f-f2fc-49d8-9604-f26007c1091b","Type":"ContainerDied","Data":"e165f271744ecf35396adba64a71cb3a92cfe3744e4a4a20b598d5e1bdf948c2"} Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.607500 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" 
event={"ID":"7c76993f-f2fc-49d8-9604-f26007c1091b","Type":"ContainerStarted","Data":"45a291a6d10d107a406bdea18241ddf562636feef57581afaac064ccc764d51b"} Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.619090 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4v9mq" event={"ID":"d2234708-fc8c-42ec-91f3-8dfaffca750e","Type":"ContainerStarted","Data":"f8d43661c07631e67a33948ce1fc3e9a45e883933da511c18d23c356d79da360"} Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.631733 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a7de60-620b-4857-839c-4587bf1cab11","Type":"ContainerStarted","Data":"323f90a6f60981a0202cc02719fcb667ada692b6b170982ce3b4d5a9c21fbd70"} Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.636060 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-r6jqp" podStartSLOduration=2.636045186 podStartE2EDuration="2.636045186s" podCreationTimestamp="2025-12-08 09:31:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:31:44.630678481 +0000 UTC m=+1028.199706471" watchObservedRunningTime="2025-12-08 09:31:44.636045186 +0000 UTC m=+1028.205073176" Dec 08 09:31:44 crc kubenswrapper[4662]: I1208 09:31:44.686033 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-4v9mq" podStartSLOduration=3.767721853 podStartE2EDuration="35.686013307s" podCreationTimestamp="2025-12-08 09:31:09 +0000 UTC" firstStartedPulling="2025-12-08 09:31:11.481913653 +0000 UTC m=+995.050941643" lastFinishedPulling="2025-12-08 09:31:43.400205107 +0000 UTC m=+1026.969233097" observedRunningTime="2025-12-08 09:31:44.6750361 +0000 UTC m=+1028.244064090" watchObservedRunningTime="2025-12-08 09:31:44.686013307 +0000 UTC m=+1028.255041297" Dec 08 09:31:45 crc kubenswrapper[4662]: I1208 09:31:45.645919 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" event={"ID":"7c76993f-f2fc-49d8-9604-f26007c1091b","Type":"ContainerStarted","Data":"ab856db3ebac75773a65772c7e66f56398be8c09e064300c4933723c676094ed"} Dec 08 09:31:45 crc kubenswrapper[4662]: I1208 09:31:45.664360 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" podStartSLOduration=24.664342125 podStartE2EDuration="24.664342125s" podCreationTimestamp="2025-12-08 09:31:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:31:45.66042589 +0000 UTC m=+1029.229453910" watchObservedRunningTime="2025-12-08 09:31:45.664342125 +0000 UTC m=+1029.233370105" Dec 08 09:31:46 crc kubenswrapper[4662]: I1208 09:31:46.515336 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:47 crc kubenswrapper[4662]: I1208 09:31:47.674912 4662 generic.go:334] "Generic (PLEG): container finished" podID="d2234708-fc8c-42ec-91f3-8dfaffca750e" containerID="f8d43661c07631e67a33948ce1fc3e9a45e883933da511c18d23c356d79da360" exitCode=0 Dec 08 09:31:47 crc kubenswrapper[4662]: I1208 09:31:47.674996 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4v9mq" 
event={"ID":"d2234708-fc8c-42ec-91f3-8dfaffca750e","Type":"ContainerDied","Data":"f8d43661c07631e67a33948ce1fc3e9a45e883933da511c18d23c356d79da360"} Dec 08 09:31:48 crc kubenswrapper[4662]: I1208 09:31:48.687530 4662 generic.go:334] "Generic (PLEG): container finished" podID="34dda116-fa4c-44db-bfd6-0077e8060c33" containerID="964b26ff1d4b27498f7b3742cb7933e2e681e00a76ee69de3c28f2c03b70417c" exitCode=0 Dec 08 09:31:48 crc kubenswrapper[4662]: I1208 09:31:48.687612 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cgx89" event={"ID":"34dda116-fa4c-44db-bfd6-0077e8060c33","Type":"ContainerDied","Data":"964b26ff1d4b27498f7b3742cb7933e2e681e00a76ee69de3c28f2c03b70417c"} Dec 08 09:31:48 crc kubenswrapper[4662]: I1208 09:31:48.689802 4662 generic.go:334] "Generic (PLEG): container finished" podID="46681e20-4717-4812-9c6e-b98bd8630c4c" containerID="8b8b6233108b9055a6df0a863c65dfafcb8b258b17df895e5a73a780d1b4d15f" exitCode=0 Dec 08 09:31:48 crc kubenswrapper[4662]: I1208 09:31:48.689956 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6jqp" event={"ID":"46681e20-4717-4812-9c6e-b98bd8630c4c","Type":"ContainerDied","Data":"8b8b6233108b9055a6df0a863c65dfafcb8b258b17df895e5a73a780d1b4d15f"} Dec 08 09:31:49 crc kubenswrapper[4662]: I1208 09:31:49.862829 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.054537 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brkpx\" (UniqueName: \"kubernetes.io/projected/d2234708-fc8c-42ec-91f3-8dfaffca750e-kube-api-access-brkpx\") pod \"d2234708-fc8c-42ec-91f3-8dfaffca750e\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.054644 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-scripts\") pod \"d2234708-fc8c-42ec-91f3-8dfaffca750e\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.054676 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-config-data\") pod \"d2234708-fc8c-42ec-91f3-8dfaffca750e\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.054736 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-combined-ca-bundle\") pod \"d2234708-fc8c-42ec-91f3-8dfaffca750e\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.054804 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2234708-fc8c-42ec-91f3-8dfaffca750e-logs\") pod \"d2234708-fc8c-42ec-91f3-8dfaffca750e\" (UID: \"d2234708-fc8c-42ec-91f3-8dfaffca750e\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.056196 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2234708-fc8c-42ec-91f3-8dfaffca750e-logs" (OuterVolumeSpecName: "logs") pod "d2234708-fc8c-42ec-91f3-8dfaffca750e" (UID: "d2234708-fc8c-42ec-91f3-8dfaffca750e"). 
InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.075249 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2234708-fc8c-42ec-91f3-8dfaffca750e-kube-api-access-brkpx" (OuterVolumeSpecName: "kube-api-access-brkpx") pod "d2234708-fc8c-42ec-91f3-8dfaffca750e" (UID: "d2234708-fc8c-42ec-91f3-8dfaffca750e"). InnerVolumeSpecName "kube-api-access-brkpx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.086024 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-scripts" (OuterVolumeSpecName: "scripts") pod "d2234708-fc8c-42ec-91f3-8dfaffca750e" (UID: "d2234708-fc8c-42ec-91f3-8dfaffca750e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.104807 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2234708-fc8c-42ec-91f3-8dfaffca750e" (UID: "d2234708-fc8c-42ec-91f3-8dfaffca750e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.127301 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-config-data" (OuterVolumeSpecName: "config-data") pod "d2234708-fc8c-42ec-91f3-8dfaffca750e" (UID: "d2234708-fc8c-42ec-91f3-8dfaffca750e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.145187 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cgx89" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.156438 4662 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d2234708-fc8c-42ec-91f3-8dfaffca750e-logs\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.156470 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brkpx\" (UniqueName: \"kubernetes.io/projected/d2234708-fc8c-42ec-91f3-8dfaffca750e-kube-api-access-brkpx\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.156482 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.156494 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.156505 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2234708-fc8c-42ec-91f3-8dfaffca750e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.229880 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.258157 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-config\") pod \"34dda116-fa4c-44db-bfd6-0077e8060c33\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.258474 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw57g\" (UniqueName: \"kubernetes.io/projected/34dda116-fa4c-44db-bfd6-0077e8060c33-kube-api-access-dw57g\") pod \"34dda116-fa4c-44db-bfd6-0077e8060c33\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.258523 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-combined-ca-bundle\") pod \"34dda116-fa4c-44db-bfd6-0077e8060c33\" (UID: \"34dda116-fa4c-44db-bfd6-0077e8060c33\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.263047 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34dda116-fa4c-44db-bfd6-0077e8060c33-kube-api-access-dw57g" (OuterVolumeSpecName: "kube-api-access-dw57g") pod "34dda116-fa4c-44db-bfd6-0077e8060c33" (UID: "34dda116-fa4c-44db-bfd6-0077e8060c33"). InnerVolumeSpecName "kube-api-access-dw57g". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.263345 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw57g\" (UniqueName: \"kubernetes.io/projected/34dda116-fa4c-44db-bfd6-0077e8060c33-kube-api-access-dw57g\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.282029 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-config" (OuterVolumeSpecName: "config") pod "34dda116-fa4c-44db-bfd6-0077e8060c33" (UID: "34dda116-fa4c-44db-bfd6-0077e8060c33"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.282072 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "34dda116-fa4c-44db-bfd6-0077e8060c33" (UID: "34dda116-fa4c-44db-bfd6-0077e8060c33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.364645 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-combined-ca-bundle\") pod \"46681e20-4717-4812-9c6e-b98bd8630c4c\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.364809 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-scripts\") pod \"46681e20-4717-4812-9c6e-b98bd8630c4c\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.364835 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-config-data\") pod \"46681e20-4717-4812-9c6e-b98bd8630c4c\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.364874 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-credential-keys\") pod \"46681e20-4717-4812-9c6e-b98bd8630c4c\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.364955 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxhz5\" (UniqueName: \"kubernetes.io/projected/46681e20-4717-4812-9c6e-b98bd8630c4c-kube-api-access-jxhz5\") pod \"46681e20-4717-4812-9c6e-b98bd8630c4c\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.365080 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-fernet-keys\") pod \"46681e20-4717-4812-9c6e-b98bd8630c4c\" (UID: \"46681e20-4717-4812-9c6e-b98bd8630c4c\") " Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.365543 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.365569 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/34dda116-fa4c-44db-bfd6-0077e8060c33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.368533 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-scripts" (OuterVolumeSpecName: "scripts") pod "46681e20-4717-4812-9c6e-b98bd8630c4c" (UID: "46681e20-4717-4812-9c6e-b98bd8630c4c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.368829 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "46681e20-4717-4812-9c6e-b98bd8630c4c" (UID: "46681e20-4717-4812-9c6e-b98bd8630c4c"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.369515 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "46681e20-4717-4812-9c6e-b98bd8630c4c" (UID: "46681e20-4717-4812-9c6e-b98bd8630c4c"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.370197 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46681e20-4717-4812-9c6e-b98bd8630c4c-kube-api-access-jxhz5" (OuterVolumeSpecName: "kube-api-access-jxhz5") pod "46681e20-4717-4812-9c6e-b98bd8630c4c" (UID: "46681e20-4717-4812-9c6e-b98bd8630c4c"). InnerVolumeSpecName "kube-api-access-jxhz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.387357 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-config-data" (OuterVolumeSpecName: "config-data") pod "46681e20-4717-4812-9c6e-b98bd8630c4c" (UID: "46681e20-4717-4812-9c6e-b98bd8630c4c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.400730 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "46681e20-4717-4812-9c6e-b98bd8630c4c" (UID: "46681e20-4717-4812-9c6e-b98bd8630c4c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.466563 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.466593 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.466602 4662 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-credential-keys\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.466614 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxhz5\" (UniqueName: \"kubernetes.io/projected/46681e20-4717-4812-9c6e-b98bd8630c4c-kube-api-access-jxhz5\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.466623 4662 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-fernet-keys\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.466650 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46681e20-4717-4812-9c6e-b98bd8630c4c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.740572 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-4v9mq" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.740577 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-4v9mq" event={"ID":"d2234708-fc8c-42ec-91f3-8dfaffca750e","Type":"ContainerDied","Data":"6a9596dae62283eaf1cbd95781fbc3bb7b9af01997c475ac2ef9323e6f64a1a5"} Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.741541 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a9596dae62283eaf1cbd95781fbc3bb7b9af01997c475ac2ef9323e6f64a1a5" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.743220 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a7de60-620b-4857-839c-4587bf1cab11","Type":"ContainerStarted","Data":"ed9403d5a7fab5224cb239b7c48ac7a47b4bd6fa87a2e6951faa26cdfd6ce8b4"} Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.749290 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-cgx89" event={"ID":"34dda116-fa4c-44db-bfd6-0077e8060c33","Type":"ContainerDied","Data":"d10707659fa77e53397fbd2d762d3464543050943c5f8bbf5a392d410bc46715"} Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.749342 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d10707659fa77e53397fbd2d762d3464543050943c5f8bbf5a392d410bc46715" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.749425 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-cgx89" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.771795 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-r6jqp" event={"ID":"46681e20-4717-4812-9c6e-b98bd8630c4c","Type":"ContainerDied","Data":"0c10008aeb168c1948b26e879253b0680b2d5a78fb5dba2c0fb15d031e36bd9e"} Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.771846 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c10008aeb168c1948b26e879253b0680b2d5a78fb5dba2c0fb15d031e36bd9e" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.771901 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-r6jqp" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.928003 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-glvjm"] Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.928292 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" podUID="7c76993f-f2fc-49d8-9604-f26007c1091b" containerName="dnsmasq-dns" containerID="cri-o://ab856db3ebac75773a65772c7e66f56398be8c09e064300c4933723c676094ed" gracePeriod=10 Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.932038 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.956538 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6c86ffd5b9-ffgx7"] Dec 08 09:31:50 crc kubenswrapper[4662]: E1208 09:31:50.956898 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46681e20-4717-4812-9c6e-b98bd8630c4c" containerName="keystone-bootstrap" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.956910 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="46681e20-4717-4812-9c6e-b98bd8630c4c" containerName="keystone-bootstrap" Dec 08 09:31:50 crc kubenswrapper[4662]: E1208 09:31:50.956921 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2234708-fc8c-42ec-91f3-8dfaffca750e" containerName="placement-db-sync" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.956928 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2234708-fc8c-42ec-91f3-8dfaffca750e" containerName="placement-db-sync" Dec 08 09:31:50 crc kubenswrapper[4662]: E1208 09:31:50.956945 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34dda116-fa4c-44db-bfd6-0077e8060c33" containerName="neutron-db-sync" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.956952 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="34dda116-fa4c-44db-bfd6-0077e8060c33" containerName="neutron-db-sync" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.957104 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="46681e20-4717-4812-9c6e-b98bd8630c4c" containerName="keystone-bootstrap" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.957123 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="34dda116-fa4c-44db-bfd6-0077e8060c33" containerName="neutron-db-sync" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.957131 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2234708-fc8c-42ec-91f3-8dfaffca750e" containerName="placement-db-sync" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.957713 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.971234 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.971391 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.971428 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.971574 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.971731 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.975726 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c86ffd5b9-ffgx7"] Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.976623 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-cm5ws" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.976706 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-config-data\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.976779 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-scripts\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.976831 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-public-tls-certs\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.976901 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-credential-keys\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.976953 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2xvn\" (UniqueName: \"kubernetes.io/projected/bbc36380-3a09-4705-9c58-6795b96b8199-kube-api-access-p2xvn\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.977103 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-internal-tls-certs\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: 
\"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.977143 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-combined-ca-bundle\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:50 crc kubenswrapper[4662]: I1208 09:31:50.977188 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-fernet-keys\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.043870 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rx4kl"] Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.045528 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.077214 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rx4kl"] Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.077645 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-internal-tls-certs\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.077718 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-combined-ca-bundle\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.077805 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-fernet-keys\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.077896 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-config-data\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.077959 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.078027 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-config\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.078087 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-scripts\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.078159 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-public-tls-certs\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.078274 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-credential-keys\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.078342 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.078414 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2xvn\" (UniqueName: \"kubernetes.io/projected/bbc36380-3a09-4705-9c58-6795b96b8199-kube-api-access-p2xvn\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.078477 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.078550 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47w5r\" (UniqueName: \"kubernetes.io/projected/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-kube-api-access-47w5r\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.093281 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6775bc75d4-c5zmq"] Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.094891 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.106889 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-internal-tls-certs\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.107454 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.107721 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.107941 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.108041 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.108206 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-vssmw" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.109679 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-public-tls-certs\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.110088 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-credential-keys\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.110356 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2xvn\" (UniqueName: \"kubernetes.io/projected/bbc36380-3a09-4705-9c58-6795b96b8199-kube-api-access-p2xvn\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.114452 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-config-data\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.114981 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-combined-ca-bundle\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.115865 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bbc36380-3a09-4705-9c58-6795b96b8199-scripts\") pod \"keystone-6c86ffd5b9-ffgx7\" (UID: \"bbc36380-3a09-4705-9c58-6795b96b8199\") " pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.173366 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6775bc75d4-c5zmq"]
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.180507 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.181452 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-sb\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.181904 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.181959 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47w5r\" (UniqueName: \"kubernetes.io/projected/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-kube-api-access-47w5r\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.182089 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.182132 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-config\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.183052 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-config\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.183576 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-nb\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl"
pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.184183 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-dns-svc\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.229918 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47w5r\" (UniqueName: \"kubernetes.io/projected/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-kube-api-access-47w5r\") pod \"dnsmasq-dns-5f66db59b9-rx4kl\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") " pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.283408 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-config-data\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.283452 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-internal-tls-certs\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.283527 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-scripts\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.283546 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-public-tls-certs\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.283570 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-combined-ca-bundle\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.283600 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcmrn\" (UniqueName: \"kubernetes.io/projected/0dcf5d14-7976-45be-bc8a-2a551cf2babc-kube-api-access-mcmrn\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.283617 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dcf5d14-7976-45be-bc8a-2a551cf2babc-logs\") pod \"placement-6775bc75d4-c5zmq\" (UID: 
\"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.302188 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.382464 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.389627 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-public-tls-certs\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.389691 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-combined-ca-bundle\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.389729 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcmrn\" (UniqueName: \"kubernetes.io/projected/0dcf5d14-7976-45be-bc8a-2a551cf2babc-kube-api-access-mcmrn\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.389758 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dcf5d14-7976-45be-bc8a-2a551cf2babc-logs\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.389791 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-config-data\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.389816 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-internal-tls-certs\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.389891 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-scripts\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.396845 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-public-tls-certs\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.400503 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-scripts\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.400735 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0dcf5d14-7976-45be-bc8a-2a551cf2babc-logs\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.403547 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-config-data\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.405350 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0dcf5d14-7976-45be-bc8a-2a551cf2babc-combined-ca-bundle\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq"
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.425813 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c65c7b9c6-qttgk"]
Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.427611 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c65c7b9c6-qttgk"
Need to start a new one" pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.437696 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qrkj4" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.438010 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.445003 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.445177 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c65c7b9c6-qttgk"] Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.450058 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.487479 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcmrn\" (UniqueName: \"kubernetes.io/projected/0dcf5d14-7976-45be-bc8a-2a551cf2babc-kube-api-access-mcmrn\") pod \"placement-6775bc75d4-c5zmq\" (UID: \"0dcf5d14-7976-45be-bc8a-2a551cf2babc\") " pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.520877 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" podUID="7c76993f-f2fc-49d8-9604-f26007c1091b" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: connect: connection refused" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.593811 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w45fx\" (UniqueName: \"kubernetes.io/projected/7958f2ae-4e08-4879-8235-8ae34a4ea86b-kube-api-access-w45fx\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.593894 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-combined-ca-bundle\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.593933 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-config\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.593963 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-ovndb-tls-certs\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.594005 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-httpd-config\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: 
\"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.641158 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.701087 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-combined-ca-bundle\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.701166 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-config\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.701211 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-ovndb-tls-certs\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.701269 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-httpd-config\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.701318 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w45fx\" (UniqueName: \"kubernetes.io/projected/7958f2ae-4e08-4879-8235-8ae34a4ea86b-kube-api-access-w45fx\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.734996 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-httpd-config\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.739973 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-config\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.742644 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-ovndb-tls-certs\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.746550 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-combined-ca-bundle\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: 
\"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.752931 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w45fx\" (UniqueName: \"kubernetes.io/projected/7958f2ae-4e08-4879-8235-8ae34a4ea86b-kube-api-access-w45fx\") pod \"neutron-5c65c7b9c6-qttgk\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.790168 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.820151 4662 generic.go:334] "Generic (PLEG): container finished" podID="7c76993f-f2fc-49d8-9604-f26007c1091b" containerID="ab856db3ebac75773a65772c7e66f56398be8c09e064300c4933723c676094ed" exitCode=0 Dec 08 09:31:51 crc kubenswrapper[4662]: I1208 09:31:51.820205 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" event={"ID":"7c76993f-f2fc-49d8-9604-f26007c1091b","Type":"ContainerDied","Data":"ab856db3ebac75773a65772c7e66f56398be8c09e064300c4933723c676094ed"} Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.167326 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6c86ffd5b9-ffgx7"] Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.322002 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.406499 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rx4kl"] Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.482499 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2plg\" (UniqueName: \"kubernetes.io/projected/7c76993f-f2fc-49d8-9604-f26007c1091b-kube-api-access-h2plg\") pod \"7c76993f-f2fc-49d8-9604-f26007c1091b\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.482538 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-config\") pod \"7c76993f-f2fc-49d8-9604-f26007c1091b\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.482586 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-dns-svc\") pod \"7c76993f-f2fc-49d8-9604-f26007c1091b\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.482634 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-nb\") pod \"7c76993f-f2fc-49d8-9604-f26007c1091b\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.482726 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-sb\") pod \"7c76993f-f2fc-49d8-9604-f26007c1091b\" (UID: \"7c76993f-f2fc-49d8-9604-f26007c1091b\") " Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.508482 
Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.589869 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2plg\" (UniqueName: \"kubernetes.io/projected/7c76993f-f2fc-49d8-9604-f26007c1091b-kube-api-access-h2plg\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.594436 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-config" (OuterVolumeSpecName: "config") pod "7c76993f-f2fc-49d8-9604-f26007c1091b" (UID: "7c76993f-f2fc-49d8-9604-f26007c1091b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.622736 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6775bc75d4-c5zmq"]
Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.679441 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7c76993f-f2fc-49d8-9604-f26007c1091b" (UID: "7c76993f-f2fc-49d8-9604-f26007c1091b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.689965 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7c76993f-f2fc-49d8-9604-f26007c1091b" (UID: "7c76993f-f2fc-49d8-9604-f26007c1091b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.691008 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-config\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.691028 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-dns-svc\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.691038 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.706676 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7c76993f-f2fc-49d8-9604-f26007c1091b" (UID: "7c76993f-f2fc-49d8-9604-f26007c1091b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.728754 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c65c7b9c6-qttgk"] Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.792897 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7c76993f-f2fc-49d8-9604-f26007c1091b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.861389 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c86ffd5b9-ffgx7" event={"ID":"bbc36380-3a09-4705-9c58-6795b96b8199","Type":"ContainerStarted","Data":"62d5c05206238884e2a81f92ee2e1b480217d5a87193eceaddd6d2bb79ef5f8d"} Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.861728 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6c86ffd5b9-ffgx7" event={"ID":"bbc36380-3a09-4705-9c58-6795b96b8199","Type":"ContainerStarted","Data":"2c1a4a39c7ea950d1085e4e1866949bc13718cdbd398d35bcf77411c5aff91af"} Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.863565 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.890795 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" event={"ID":"68ba3ed7-3419-4918-bf9e-1ed15b0a2596","Type":"ContainerStarted","Data":"68c7920cbe137a735ca1ed0c845f0b88565468761e3eef669be353bbd8fcd3a6"} Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.904556 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6c86ffd5b9-ffgx7" podStartSLOduration=2.904539498 podStartE2EDuration="2.904539498s" podCreationTimestamp="2025-12-08 09:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:31:52.898907576 +0000 UTC m=+1036.467935566" watchObservedRunningTime="2025-12-08 09:31:52.904539498 +0000 UTC m=+1036.473567488" Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.908369 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.908465 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b6dbdb6f5-glvjm" event={"ID":"7c76993f-f2fc-49d8-9604-f26007c1091b","Type":"ContainerDied","Data":"45a291a6d10d107a406bdea18241ddf562636feef57581afaac064ccc764d51b"} Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.908518 4662 scope.go:117] "RemoveContainer" containerID="ab856db3ebac75773a65772c7e66f56398be8c09e064300c4933723c676094ed" Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.910912 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6775bc75d4-c5zmq" event={"ID":"0dcf5d14-7976-45be-bc8a-2a551cf2babc","Type":"ContainerStarted","Data":"53f273e1cbd5a1e2cd2eff4d7b74a1a0a3d2942efb32b1918a0d7805bb1acf68"} Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.916754 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c65c7b9c6-qttgk" event={"ID":"7958f2ae-4e08-4879-8235-8ae34a4ea86b","Type":"ContainerStarted","Data":"0c418e0cc5b323054b3696b9f20f04ea26d80fe94725567b898380887726aae5"} Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.956829 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-glvjm"] Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.967527 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b6dbdb6f5-glvjm"] Dec 08 09:31:52 crc kubenswrapper[4662]: I1208 09:31:52.987622 4662 scope.go:117] "RemoveContainer" containerID="e165f271744ecf35396adba64a71cb3a92cfe3744e4a4a20b598d5e1bdf948c2" Dec 08 09:31:53 crc kubenswrapper[4662]: I1208 09:31:53.928120 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c65c7b9c6-qttgk" event={"ID":"7958f2ae-4e08-4879-8235-8ae34a4ea86b","Type":"ContainerStarted","Data":"02eca89352fc8db4427492de988af39bd978d91dfb33f2bd8c676068678e2807"} Dec 08 09:31:53 crc kubenswrapper[4662]: I1208 09:31:53.929764 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c65c7b9c6-qttgk" event={"ID":"7958f2ae-4e08-4879-8235-8ae34a4ea86b","Type":"ContainerStarted","Data":"815602ddb771eb81949d3080aadfb9816e9f10a154a126dafcde60f9edd25a33"} Dec 08 09:31:53 crc kubenswrapper[4662]: I1208 09:31:53.929903 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:31:53 crc kubenswrapper[4662]: I1208 09:31:53.931014 4662 generic.go:334] "Generic (PLEG): container finished" podID="68ba3ed7-3419-4918-bf9e-1ed15b0a2596" containerID="9eb8845ba776f6e57b60bace9f527d14a610cd4ac59ff086e22879520bdfe72e" exitCode=0 Dec 08 09:31:53 crc kubenswrapper[4662]: I1208 09:31:53.931130 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" event={"ID":"68ba3ed7-3419-4918-bf9e-1ed15b0a2596","Type":"ContainerDied","Data":"9eb8845ba776f6e57b60bace9f527d14a610cd4ac59ff086e22879520bdfe72e"} Dec 08 09:31:53 crc kubenswrapper[4662]: I1208 09:31:53.970612 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6775bc75d4-c5zmq" event={"ID":"0dcf5d14-7976-45be-bc8a-2a551cf2babc","Type":"ContainerStarted","Data":"19ca4bfa90ecb5f29f2750851faad7248a8f6bb3c5c068f6d3fda6d328497aa9"} Dec 08 09:31:53 crc kubenswrapper[4662]: I1208 09:31:53.971041 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6775bc75d4-c5zmq" 
event={"ID":"0dcf5d14-7976-45be-bc8a-2a551cf2babc","Type":"ContainerStarted","Data":"b1871efe1d5c562100354e35cd57546b5ff556ccd87f415e8db60f7c61178318"} Dec 08 09:31:53 crc kubenswrapper[4662]: I1208 09:31:53.971083 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:53 crc kubenswrapper[4662]: I1208 09:31:53.971108 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6775bc75d4-c5zmq" Dec 08 09:31:53 crc kubenswrapper[4662]: I1208 09:31:53.979777 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c65c7b9c6-qttgk" podStartSLOduration=2.979731885 podStartE2EDuration="2.979731885s" podCreationTimestamp="2025-12-08 09:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:31:53.961666347 +0000 UTC m=+1037.530694337" watchObservedRunningTime="2025-12-08 09:31:53.979731885 +0000 UTC m=+1037.548759875" Dec 08 09:31:54 crc kubenswrapper[4662]: I1208 09:31:54.058648 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6775bc75d4-c5zmq" podStartSLOduration=3.058626978 podStartE2EDuration="3.058626978s" podCreationTimestamp="2025-12-08 09:31:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:31:54.048260988 +0000 UTC m=+1037.617288978" watchObservedRunningTime="2025-12-08 09:31:54.058626978 +0000 UTC m=+1037.627654958" Dec 08 09:31:54 crc kubenswrapper[4662]: I1208 09:31:54.715788 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c76993f-f2fc-49d8-9604-f26007c1091b" path="/var/lib/kubelet/pods/7c76993f-f2fc-49d8-9604-f26007c1091b/volumes" Dec 08 09:31:54 crc kubenswrapper[4662]: I1208 09:31:54.995752 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5d95b89f45-hlj4c"] Dec 08 09:31:54 crc kubenswrapper[4662]: E1208 09:31:54.996150 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c76993f-f2fc-49d8-9604-f26007c1091b" containerName="init" Dec 08 09:31:54 crc kubenswrapper[4662]: I1208 09:31:54.996161 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c76993f-f2fc-49d8-9604-f26007c1091b" containerName="init" Dec 08 09:31:54 crc kubenswrapper[4662]: E1208 09:31:54.996172 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c76993f-f2fc-49d8-9604-f26007c1091b" containerName="dnsmasq-dns" Dec 08 09:31:54 crc kubenswrapper[4662]: I1208 09:31:54.996178 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c76993f-f2fc-49d8-9604-f26007c1091b" containerName="dnsmasq-dns" Dec 08 09:31:54 crc kubenswrapper[4662]: I1208 09:31:54.996345 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c76993f-f2fc-49d8-9604-f26007c1091b" containerName="dnsmasq-dns" Dec 08 09:31:54 crc kubenswrapper[4662]: I1208 09:31:54.997193 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.001591 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.002064 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.016240 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d95b89f45-hlj4c"] Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.144406 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-internal-tls-certs\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.146637 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-httpd-config\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.147104 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-public-tls-certs\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.147177 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-ovndb-tls-certs\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.147257 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-combined-ca-bundle\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.147544 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dmgb\" (UniqueName: \"kubernetes.io/projected/739c97af-0cc4-4ba9-8707-2d15947dda47-kube-api-access-5dmgb\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.147630 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-config\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.249559 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-config\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.249638 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-internal-tls-certs\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.249727 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-httpd-config\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.249825 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-public-tls-certs\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.249860 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-ovndb-tls-certs\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.249892 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-combined-ca-bundle\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.249984 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dmgb\" (UniqueName: \"kubernetes.io/projected/739c97af-0cc4-4ba9-8707-2d15947dda47-kube-api-access-5dmgb\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.256195 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-combined-ca-bundle\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.259833 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-httpd-config\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.263835 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-public-tls-certs\") pod \"neutron-5d95b89f45-hlj4c\" (UID: 
\"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.264325 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-config\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.271196 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-ovndb-tls-certs\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.276230 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dmgb\" (UniqueName: \"kubernetes.io/projected/739c97af-0cc4-4ba9-8707-2d15947dda47-kube-api-access-5dmgb\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.278641 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/739c97af-0cc4-4ba9-8707-2d15947dda47-internal-tls-certs\") pod \"neutron-5d95b89f45-hlj4c\" (UID: \"739c97af-0cc4-4ba9-8707-2d15947dda47\") " pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.319528 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.996179 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" event={"ID":"68ba3ed7-3419-4918-bf9e-1ed15b0a2596","Type":"ContainerStarted","Data":"0a1fbe496b26ebc9ed59ac49427a231b67ca9f6f9061b12a5c44656f7512e26a"} Dec 08 09:31:55 crc kubenswrapper[4662]: I1208 09:31:55.998449 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:31:56 crc kubenswrapper[4662]: I1208 09:31:56.025850 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" podStartSLOduration=6.025826334 podStartE2EDuration="6.025826334s" podCreationTimestamp="2025-12-08 09:31:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:31:56.020627085 +0000 UTC m=+1039.589655075" watchObservedRunningTime="2025-12-08 09:31:56.025826334 +0000 UTC m=+1039.594854334" Dec 08 09:31:56 crc kubenswrapper[4662]: I1208 09:31:56.073956 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5d95b89f45-hlj4c"] Dec 08 09:31:59 crc kubenswrapper[4662]: W1208 09:31:59.879833 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod739c97af_0cc4_4ba9_8707_2d15947dda47.slice/crio-b63d2bdeabdeb44b5f7600176496f9edccfa452f0e96766eb9b30b466fec13fa WatchSource:0}: Error finding container b63d2bdeabdeb44b5f7600176496f9edccfa452f0e96766eb9b30b466fec13fa: Status 404 returned error can't find the container with id b63d2bdeabdeb44b5f7600176496f9edccfa452f0e96766eb9b30b466fec13fa Dec 08 09:32:00 crc 
Dec 08 09:32:00 crc kubenswrapper[4662]: I1208 09:32:00.034052 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d95b89f45-hlj4c" event={"ID":"739c97af-0cc4-4ba9-8707-2d15947dda47","Type":"ContainerStarted","Data":"b63d2bdeabdeb44b5f7600176496f9edccfa452f0e96766eb9b30b466fec13fa"} Dec 08 09:32:00 crc kubenswrapper[4662]: E1208 09:32:00.775394 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="67a7de60-620b-4857-839c-4587bf1cab11" Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.051101 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d95b89f45-hlj4c" event={"ID":"739c97af-0cc4-4ba9-8707-2d15947dda47","Type":"ContainerStarted","Data":"359268c423e91f5ed625b7aee9a0f5cca9d04d338e17b5e974e9bb7a061d2e0a"} Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.051179 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5d95b89f45-hlj4c" event={"ID":"739c97af-0cc4-4ba9-8707-2d15947dda47","Type":"ContainerStarted","Data":"187a0f8c3165422dfd9c607edc71c5437ebc535b80a0c1f0d50a973210341e2d"} Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.051288 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.063504 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="ceilometer-notification-agent" containerID="cri-o://323f90a6f60981a0202cc02719fcb667ada692b6b170982ce3b4d5a9c21fbd70" gracePeriod=30 Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.063832 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a7de60-620b-4857-839c-4587bf1cab11","Type":"ContainerStarted","Data":"02f3b114a2550b1717bb62b3e12aa1a8420046494a1af552497e0c1fb69bb64d"} Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.063877 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.063920 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="proxy-httpd" containerID="cri-o://02f3b114a2550b1717bb62b3e12aa1a8420046494a1af552497e0c1fb69bb64d" gracePeriod=30 Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.063946 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="sg-core" containerID="cri-o://ed9403d5a7fab5224cb239b7c48ac7a47b4bd6fa87a2e6951faa26cdfd6ce8b4" gracePeriod=30 Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.070000 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xd547" event={"ID":"a3cedcd2-d44c-4c45-acc9-384d45424740","Type":"ContainerStarted","Data":"7b91fb0d88656350ea391f276eda282c32b39f73188722fb29daddc51bb83697"} Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.104874 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5d95b89f45-hlj4c" podStartSLOduration=7.1048563829999996 podStartE2EDuration="7.104856383s" podCreationTimestamp="2025-12-08 09:31:54 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:32:01.079118394 +0000 UTC m=+1044.648146404" watchObservedRunningTime="2025-12-08 09:32:01.104856383 +0000 UTC m=+1044.673884373" Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.119113 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-xd547" podStartSLOduration=7.581890301 podStartE2EDuration="52.119092584s" podCreationTimestamp="2025-12-08 09:31:09 +0000 UTC" firstStartedPulling="2025-12-08 09:31:11.435855158 +0000 UTC m=+995.004883148" lastFinishedPulling="2025-12-08 09:31:55.973057441 +0000 UTC m=+1039.542085431" observedRunningTime="2025-12-08 09:32:01.116275089 +0000 UTC m=+1044.685303089" watchObservedRunningTime="2025-12-08 09:32:01.119092584 +0000 UTC m=+1044.688120564" Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.383882 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.452918 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-qcgpk"] Dec 08 09:32:01 crc kubenswrapper[4662]: I1208 09:32:01.453215 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" podUID="b11e546d-17fd-4aae-a904-879bf7264818" containerName="dnsmasq-dns" containerID="cri-o://548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838" gracePeriod=10 Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.051500 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.072342 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-nb\") pod \"b11e546d-17fd-4aae-a904-879bf7264818\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.072418 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-config\") pod \"b11e546d-17fd-4aae-a904-879bf7264818\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.072454 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnzl6\" (UniqueName: \"kubernetes.io/projected/b11e546d-17fd-4aae-a904-879bf7264818-kube-api-access-pnzl6\") pod \"b11e546d-17fd-4aae-a904-879bf7264818\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.072551 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-dns-svc\") pod \"b11e546d-17fd-4aae-a904-879bf7264818\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.072593 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-sb\") pod \"b11e546d-17fd-4aae-a904-879bf7264818\" (UID: \"b11e546d-17fd-4aae-a904-879bf7264818\") " Dec 08 09:32:02 
Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.089626 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11e546d-17fd-4aae-a904-879bf7264818-kube-api-access-pnzl6" (OuterVolumeSpecName: "kube-api-access-pnzl6") pod "b11e546d-17fd-4aae-a904-879bf7264818" (UID: "b11e546d-17fd-4aae-a904-879bf7264818"). InnerVolumeSpecName "kube-api-access-pnzl6". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.097613 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-z4wmv" event={"ID":"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc","Type":"ContainerStarted","Data":"18d70e7d42db0da9422e825c48afe254259ace2db7fd040215b85113b978379b"} Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.124566 4662 generic.go:334] "Generic (PLEG): container finished" podID="b11e546d-17fd-4aae-a904-879bf7264818" containerID="548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838" exitCode=0 Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.124626 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" event={"ID":"b11e546d-17fd-4aae-a904-879bf7264818","Type":"ContainerDied","Data":"548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838"} Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.124653 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" event={"ID":"b11e546d-17fd-4aae-a904-879bf7264818","Type":"ContainerDied","Data":"6f40b02cc611bfbd5cc86fa15950236fb65a837401e3934fef86186a70aef775"} Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.124669 4662 scope.go:117] "RemoveContainer" containerID="548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838"
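
The "Generic (PLEG): container finished" record carries the exit code (0 here), and each paired "SyncLoop (PLEG)" entry prints its event={...} payload as JSON, with ContainerDied emitted once for the container ID and once for the sandbox ID. A small decoding sketch, assuming a local struct that mirrors the printed keys rather than the kubelet's internal event type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// plegEvent is a local stand-in matching the keys printed in event={...}.
type plegEvent struct {
	ID   string // pod UID
	Type string // ContainerStarted, ContainerDied, ...
	Data string // container or sandbox ID
}

func main() {
	payload := `{"ID":"b11e546d-17fd-4aae-a904-879bf7264818","Type":"ContainerDied","Data":"548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838"}`
	var ev plegEvent
	if err := json.Unmarshal([]byte(payload), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("%s for pod %s (id %s...)\n", ev.Type, ev.ID, ev.Data[:12])
}
```
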
Need to start a new one" pod="openstack/dnsmasq-dns-6bf59f66bf-qcgpk" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.140259 4662 generic.go:334] "Generic (PLEG): container finished" podID="67a7de60-620b-4857-839c-4587bf1cab11" containerID="02f3b114a2550b1717bb62b3e12aa1a8420046494a1af552497e0c1fb69bb64d" exitCode=0 Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.140501 4662 generic.go:334] "Generic (PLEG): container finished" podID="67a7de60-620b-4857-839c-4587bf1cab11" containerID="ed9403d5a7fab5224cb239b7c48ac7a47b4bd6fa87a2e6951faa26cdfd6ce8b4" exitCode=2 Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.141476 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a7de60-620b-4857-839c-4587bf1cab11","Type":"ContainerDied","Data":"02f3b114a2550b1717bb62b3e12aa1a8420046494a1af552497e0c1fb69bb64d"} Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.141584 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a7de60-620b-4857-839c-4587bf1cab11","Type":"ContainerDied","Data":"ed9403d5a7fab5224cb239b7c48ac7a47b4bd6fa87a2e6951faa26cdfd6ce8b4"} Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.210182 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnzl6\" (UniqueName: \"kubernetes.io/projected/b11e546d-17fd-4aae-a904-879bf7264818-kube-api-access-pnzl6\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.251103 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b11e546d-17fd-4aae-a904-879bf7264818" (UID: "b11e546d-17fd-4aae-a904-879bf7264818"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.270272 4662 scope.go:117] "RemoveContainer" containerID="17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.291993 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b11e546d-17fd-4aae-a904-879bf7264818" (UID: "b11e546d-17fd-4aae-a904-879bf7264818"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.302402 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b11e546d-17fd-4aae-a904-879bf7264818" (UID: "b11e546d-17fd-4aae-a904-879bf7264818"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.316024 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.316061 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.316074 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.325632 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-config" (OuterVolumeSpecName: "config") pod "b11e546d-17fd-4aae-a904-879bf7264818" (UID: "b11e546d-17fd-4aae-a904-879bf7264818"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.339931 4662 scope.go:117] "RemoveContainer" containerID="548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838" Dec 08 09:32:02 crc kubenswrapper[4662]: E1208 09:32:02.340549 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838\": container with ID starting with 548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838 not found: ID does not exist" containerID="548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.340583 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838"} err="failed to get container status \"548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838\": rpc error: code = NotFound desc = could not find container \"548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838\": container with ID starting with 548181b943e3df655e092e947ba9edd1bc55fb88395c97ce2b1e1eec8bd0c838 not found: ID does not exist" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.340608 4662 scope.go:117] "RemoveContainer" containerID="17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133" Dec 08 09:32:02 crc kubenswrapper[4662]: E1208 09:32:02.340985 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133\": container with ID starting with 17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133 not found: ID does not exist" containerID="17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.341018 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133"} err="failed to get container status \"17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133\": rpc error: code = NotFound desc = could not find container 
\"17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133\": container with ID starting with 17ef6d4f4608de5ead63b767c3cd1056a29fa0774b518836d8b8fe4fc01bb133 not found: ID does not exist" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.418300 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b11e546d-17fd-4aae-a904-879bf7264818-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.470221 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-z4wmv" podStartSLOduration=4.099247901 podStartE2EDuration="53.470185934s" podCreationTimestamp="2025-12-08 09:31:09 +0000 UTC" firstStartedPulling="2025-12-08 09:31:11.198571734 +0000 UTC m=+994.767599724" lastFinishedPulling="2025-12-08 09:32:00.569509757 +0000 UTC m=+1044.138537757" observedRunningTime="2025-12-08 09:32:02.141733028 +0000 UTC m=+1045.710761018" watchObservedRunningTime="2025-12-08 09:32:02.470185934 +0000 UTC m=+1046.039213924" Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.480819 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-qcgpk"] Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.484206 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bf59f66bf-qcgpk"] Dec 08 09:32:02 crc kubenswrapper[4662]: I1208 09:32:02.708760 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11e546d-17fd-4aae-a904-879bf7264818" path="/var/lib/kubelet/pods/b11e546d-17fd-4aae-a904-879bf7264818/volumes" Dec 08 09:32:04 crc kubenswrapper[4662]: I1208 09:32:04.177525 4662 generic.go:334] "Generic (PLEG): container finished" podID="a3cedcd2-d44c-4c45-acc9-384d45424740" containerID="7b91fb0d88656350ea391f276eda282c32b39f73188722fb29daddc51bb83697" exitCode=0 Dec 08 09:32:04 crc kubenswrapper[4662]: I1208 09:32:04.177595 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xd547" event={"ID":"a3cedcd2-d44c-4c45-acc9-384d45424740","Type":"ContainerDied","Data":"7b91fb0d88656350ea391f276eda282c32b39f73188722fb29daddc51bb83697"} Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.217864 4662 generic.go:334] "Generic (PLEG): container finished" podID="67a7de60-620b-4857-839c-4587bf1cab11" containerID="323f90a6f60981a0202cc02719fcb667ada692b6b170982ce3b4d5a9c21fbd70" exitCode=0 Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.218252 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a7de60-620b-4857-839c-4587bf1cab11","Type":"ContainerDied","Data":"323f90a6f60981a0202cc02719fcb667ada692b6b170982ce3b4d5a9c21fbd70"} Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.395139 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.562330 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-scripts\") pod \"67a7de60-620b-4857-839c-4587bf1cab11\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.562780 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-log-httpd\") pod \"67a7de60-620b-4857-839c-4587bf1cab11\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.562881 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-sg-core-conf-yaml\") pod \"67a7de60-620b-4857-839c-4587bf1cab11\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.562964 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v9c4\" (UniqueName: \"kubernetes.io/projected/67a7de60-620b-4857-839c-4587bf1cab11-kube-api-access-7v9c4\") pod \"67a7de60-620b-4857-839c-4587bf1cab11\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.563053 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-combined-ca-bundle\") pod \"67a7de60-620b-4857-839c-4587bf1cab11\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.563123 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-config-data\") pod \"67a7de60-620b-4857-839c-4587bf1cab11\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.563219 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-run-httpd\") pod \"67a7de60-620b-4857-839c-4587bf1cab11\" (UID: \"67a7de60-620b-4857-839c-4587bf1cab11\") " Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.563756 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "67a7de60-620b-4857-839c-4587bf1cab11" (UID: "67a7de60-620b-4857-839c-4587bf1cab11"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.563837 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "67a7de60-620b-4857-839c-4587bf1cab11" (UID: "67a7de60-620b-4857-839c-4587bf1cab11"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.568233 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67a7de60-620b-4857-839c-4587bf1cab11-kube-api-access-7v9c4" (OuterVolumeSpecName: "kube-api-access-7v9c4") pod "67a7de60-620b-4857-839c-4587bf1cab11" (UID: "67a7de60-620b-4857-839c-4587bf1cab11"). InnerVolumeSpecName "kube-api-access-7v9c4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.568657 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-scripts" (OuterVolumeSpecName: "scripts") pod "67a7de60-620b-4857-839c-4587bf1cab11" (UID: "67a7de60-620b-4857-839c-4587bf1cab11"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.581904 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-xd547" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.591805 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "67a7de60-620b-4857-839c-4587bf1cab11" (UID: "67a7de60-620b-4857-839c-4587bf1cab11"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.624439 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67a7de60-620b-4857-839c-4587bf1cab11" (UID: "67a7de60-620b-4857-839c-4587bf1cab11"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.648431 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-config-data" (OuterVolumeSpecName: "config-data") pod "67a7de60-620b-4857-839c-4587bf1cab11" (UID: "67a7de60-620b-4857-839c-4587bf1cab11"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.664378 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-db-sync-config-data\") pod \"a3cedcd2-d44c-4c45-acc9-384d45424740\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.664601 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6m2g\" (UniqueName: \"kubernetes.io/projected/a3cedcd2-d44c-4c45-acc9-384d45424740-kube-api-access-f6m2g\") pod \"a3cedcd2-d44c-4c45-acc9-384d45424740\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.664704 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-combined-ca-bundle\") pod \"a3cedcd2-d44c-4c45-acc9-384d45424740\" (UID: \"a3cedcd2-d44c-4c45-acc9-384d45424740\") " Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.665041 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.665109 4662 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.665217 4662 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.665299 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v9c4\" (UniqueName: \"kubernetes.io/projected/67a7de60-620b-4857-839c-4587bf1cab11-kube-api-access-7v9c4\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.665371 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.665438 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/67a7de60-620b-4857-839c-4587bf1cab11-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.665694 4662 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/67a7de60-620b-4857-839c-4587bf1cab11-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.668009 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3cedcd2-d44c-4c45-acc9-384d45424740-kube-api-access-f6m2g" (OuterVolumeSpecName: "kube-api-access-f6m2g") pod "a3cedcd2-d44c-4c45-acc9-384d45424740" (UID: "a3cedcd2-d44c-4c45-acc9-384d45424740"). InnerVolumeSpecName "kube-api-access-f6m2g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.668772 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a3cedcd2-d44c-4c45-acc9-384d45424740" (UID: "a3cedcd2-d44c-4c45-acc9-384d45424740"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.688789 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3cedcd2-d44c-4c45-acc9-384d45424740" (UID: "a3cedcd2-d44c-4c45-acc9-384d45424740"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.768616 4662 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.768687 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6m2g\" (UniqueName: \"kubernetes.io/projected/a3cedcd2-d44c-4c45-acc9-384d45424740-kube-api-access-f6m2g\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:05 crc kubenswrapper[4662]: I1208 09:32:05.768702 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3cedcd2-d44c-4c45-acc9-384d45424740-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.237139 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"67a7de60-620b-4857-839c-4587bf1cab11","Type":"ContainerDied","Data":"d4446418dceb37c9d390b251e5fc1e1c0b5bb1abfd298fbcec3a726d9172d8fa"} Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.237193 4662 scope.go:117] "RemoveContainer" containerID="02f3b114a2550b1717bb62b3e12aa1a8420046494a1af552497e0c1fb69bb64d" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.237337 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.251544 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-xd547" event={"ID":"a3cedcd2-d44c-4c45-acc9-384d45424740","Type":"ContainerDied","Data":"edd45a678288b4a3c43f1da8ccd0cf7785ba5700529d0d362f838c27c74b00da"} Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.251587 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edd45a678288b4a3c43f1da8ccd0cf7785ba5700529d0d362f838c27c74b00da" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.251660 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-xd547" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.298696 4662 scope.go:117] "RemoveContainer" containerID="ed9403d5a7fab5224cb239b7c48ac7a47b4bd6fa87a2e6951faa26cdfd6ce8b4" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.357843 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.402550 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.403654 4662 scope.go:117] "RemoveContainer" containerID="323f90a6f60981a0202cc02719fcb667ada692b6b170982ce3b4d5a9c21fbd70" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.440799 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:06 crc kubenswrapper[4662]: E1208 09:32:06.441287 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11e546d-17fd-4aae-a904-879bf7264818" containerName="init" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441308 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11e546d-17fd-4aae-a904-879bf7264818" containerName="init" Dec 08 09:32:06 crc kubenswrapper[4662]: E1208 09:32:06.441329 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="sg-core" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441338 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="sg-core" Dec 08 09:32:06 crc kubenswrapper[4662]: E1208 09:32:06.441358 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="ceilometer-notification-agent" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441365 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="ceilometer-notification-agent" Dec 08 09:32:06 crc kubenswrapper[4662]: E1208 09:32:06.441386 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b11e546d-17fd-4aae-a904-879bf7264818" containerName="dnsmasq-dns" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441393 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="b11e546d-17fd-4aae-a904-879bf7264818" containerName="dnsmasq-dns" Dec 08 09:32:06 crc kubenswrapper[4662]: E1208 09:32:06.441402 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3cedcd2-d44c-4c45-acc9-384d45424740" containerName="barbican-db-sync" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441409 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3cedcd2-d44c-4c45-acc9-384d45424740" containerName="barbican-db-sync" Dec 08 09:32:06 crc kubenswrapper[4662]: E1208 09:32:06.441424 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="proxy-httpd" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441431 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="proxy-httpd" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441628 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="proxy-httpd" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441667 4662 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b11e546d-17fd-4aae-a904-879bf7264818" containerName="dnsmasq-dns" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441682 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="sg-core" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441700 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3cedcd2-d44c-4c45-acc9-384d45424740" containerName="barbican-db-sync" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.441714 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="67a7de60-620b-4857-839c-4587bf1cab11" containerName="ceilometer-notification-agent" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.443600 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.452412 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.452683 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.455510 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.565700 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-d594947cf-hhppl"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.567579 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.578353 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.579110 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.579382 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-47t8j" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.583235 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d594947cf-hhppl"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586414 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586469 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-config-data\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586505 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n22lv\" (UniqueName: \"kubernetes.io/projected/36f4c1e2-7f91-48fd-9258-d560df73bb4a-kube-api-access-n22lv\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 
09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586526 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc6wn\" (UniqueName: \"kubernetes.io/projected/ae70464a-32e1-41cc-b173-85019c99d562-kube-api-access-jc6wn\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586561 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-log-httpd\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586610 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-run-httpd\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586630 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f4c1e2-7f91-48fd-9258-d560df73bb4a-combined-ca-bundle\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586646 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f4c1e2-7f91-48fd-9258-d560df73bb4a-config-data\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586663 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36f4c1e2-7f91-48fd-9258-d560df73bb4a-config-data-custom\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586679 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586697 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-scripts\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.586716 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f4c1e2-7f91-48fd-9258-d560df73bb4a-logs\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.596620 
4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-76455569b6-zfxpp"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.599072 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.612640 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.642178 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-76455569b6-zfxpp"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.687687 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-run-httpd\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.688549 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f4c1e2-7f91-48fd-9258-d560df73bb4a-combined-ca-bundle\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.688736 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f4c1e2-7f91-48fd-9258-d560df73bb4a-config-data\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.688846 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.688912 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36f4c1e2-7f91-48fd-9258-d560df73bb4a-config-data-custom\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.688988 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-scripts\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.689080 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f4c1e2-7f91-48fd-9258-d560df73bb4a-logs\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.689180 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6v9d\" (UniqueName: 
\"kubernetes.io/projected/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-kube-api-access-x6v9d\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.690183 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.690310 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-config-data\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.690423 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n22lv\" (UniqueName: \"kubernetes.io/projected/36f4c1e2-7f91-48fd-9258-d560df73bb4a-kube-api-access-n22lv\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.690534 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc6wn\" (UniqueName: \"kubernetes.io/projected/ae70464a-32e1-41cc-b173-85019c99d562-kube-api-access-jc6wn\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.690613 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-config-data\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.690699 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-combined-ca-bundle\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.689647 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-run-httpd\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.690966 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-logs\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.691171 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-config-data-custom\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.691250 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-log-httpd\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.691686 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-log-httpd\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.695429 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36f4c1e2-7f91-48fd-9258-d560df73bb4a-combined-ca-bundle\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.696073 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36f4c1e2-7f91-48fd-9258-d560df73bb4a-logs\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.711526 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.716125 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67a7de60-620b-4857-839c-4587bf1cab11" path="/var/lib/kubelet/pods/67a7de60-620b-4857-839c-4587bf1cab11/volumes" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.716834 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-869f779d85-xsh55"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.718500 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.723948 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-config-data\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.737539 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36f4c1e2-7f91-48fd-9258-d560df73bb4a-config-data-custom\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.750520 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-scripts\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.750786 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36f4c1e2-7f91-48fd-9258-d560df73bb4a-config-data\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.750912 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.756934 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc6wn\" (UniqueName: \"kubernetes.io/projected/ae70464a-32e1-41cc-b173-85019c99d562-kube-api-access-jc6wn\") pod \"ceilometer-0\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.757650 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n22lv\" (UniqueName: \"kubernetes.io/projected/36f4c1e2-7f91-48fd-9258-d560df73bb4a-kube-api-access-n22lv\") pod \"barbican-worker-d594947cf-hhppl\" (UID: \"36f4c1e2-7f91-48fd-9258-d560df73bb4a\") " pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.764547 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-xsh55"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.773899 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.792716 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-config-data\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.792777 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-combined-ca-bundle\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.792797 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-logs\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.792815 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-config-data-custom\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.792910 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x6v9d\" (UniqueName: \"kubernetes.io/projected/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-kube-api-access-x6v9d\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.797258 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-logs\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.803588 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-config-data\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.805386 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-config-data-custom\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.812855 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-combined-ca-bundle\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.833644 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6v9d\" (UniqueName: \"kubernetes.io/projected/0cdcfce4-297a-4fd9-8854-2b3bd51fc592-kube-api-access-x6v9d\") pod \"barbican-keystone-listener-76455569b6-zfxpp\" (UID: \"0cdcfce4-297a-4fd9-8854-2b3bd51fc592\") " pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.876810 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6fd78cf54-pjcw8"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.884359 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.891049 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.900803 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-dns-svc\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.900865 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vv9z\" (UniqueName: \"kubernetes.io/projected/260b0772-e118-4134-a015-6e0d7180f166-kube-api-access-6vv9z\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.900889 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-config\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.900916 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.900965 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.909632 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fd78cf54-pjcw8"] Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.917254 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-d594947cf-hhppl" Dec 08 09:32:06 crc kubenswrapper[4662]: I1208 09:32:06.942224 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.002594 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttmc4\" (UniqueName: \"kubernetes.io/projected/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-kube-api-access-ttmc4\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.002645 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.002666 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-combined-ca-bundle\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.002715 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-logs\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.002778 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-dns-svc\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.002797 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.002828 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vv9z\" (UniqueName: \"kubernetes.io/projected/260b0772-e118-4134-a015-6e0d7180f166-kube-api-access-6vv9z\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.002846 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-config\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.002866 4662 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data-custom\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.002889 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.003612 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-nb\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.004131 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-sb\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.004639 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-dns-svc\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.005783 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-config\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.023816 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vv9z\" (UniqueName: \"kubernetes.io/projected/260b0772-e118-4134-a015-6e0d7180f166-kube-api-access-6vv9z\") pod \"dnsmasq-dns-869f779d85-xsh55\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") " pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.104221 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ttmc4\" (UniqueName: \"kubernetes.io/projected/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-kube-api-access-ttmc4\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.104273 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-combined-ca-bundle\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.104342 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-logs\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.104412 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.104459 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data-custom\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.105260 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-logs\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.109760 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data-custom\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.109900 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-combined-ca-bundle\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.111220 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.122268 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttmc4\" (UniqueName: \"kubernetes.io/projected/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-kube-api-access-ttmc4\") pod \"barbican-api-6fd78cf54-pjcw8\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.183205 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.234386 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.274420 4662 generic.go:334] "Generic (PLEG): container finished" podID="74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" containerID="18d70e7d42db0da9422e825c48afe254259ace2db7fd040215b85113b978379b" exitCode=0 Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.274500 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-z4wmv" event={"ID":"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc","Type":"ContainerDied","Data":"18d70e7d42db0da9422e825c48afe254259ace2db7fd040215b85113b978379b"} Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.389484 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.555885 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-d594947cf-hhppl"] Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.583218 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-76455569b6-zfxpp"] Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.879494 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-xsh55"] Dec 08 09:32:07 crc kubenswrapper[4662]: I1208 09:32:07.889045 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6fd78cf54-pjcw8"] Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.314873 4662 generic.go:334] "Generic (PLEG): container finished" podID="260b0772-e118-4134-a015-6e0d7180f166" containerID="3095023eacfcd3b17c0ac8079438e2459d045455bc85dff5e1c5453d015c16f8" exitCode=0 Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.315194 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-xsh55" event={"ID":"260b0772-e118-4134-a015-6e0d7180f166","Type":"ContainerDied","Data":"3095023eacfcd3b17c0ac8079438e2459d045455bc85dff5e1c5453d015c16f8"} Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.315283 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-xsh55" event={"ID":"260b0772-e118-4134-a015-6e0d7180f166","Type":"ContainerStarted","Data":"00a1c2782aea058382b2f174c3140071095d417bb44b89dc5741175aca070a75"} Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.322333 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd78cf54-pjcw8" event={"ID":"2fa22efb-c102-4ef9-8176-ccaa4b7563c5","Type":"ContainerStarted","Data":"2ef96a4ea6409f3f1df47354f6ea661afdad58a59076951524ac5784e4147712"} Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.322366 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd78cf54-pjcw8" event={"ID":"2fa22efb-c102-4ef9-8176-ccaa4b7563c5","Type":"ContainerStarted","Data":"0dc0cf85df391988ed25141b43eb2f24eebff2ac0f4a6308468d7866861526fa"} Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.326321 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" event={"ID":"0cdcfce4-297a-4fd9-8854-2b3bd51fc592","Type":"ContainerStarted","Data":"21d3e24cacf1dfb6e0bcb2c39d453cb6bc55b06e94af1221d8dbf613839b0890"} Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.334393 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d594947cf-hhppl" 
event={"ID":"36f4c1e2-7f91-48fd-9258-d560df73bb4a","Type":"ContainerStarted","Data":"7ad9aa283121ca84a9937a4053a3cec194019ff3c2db226b81ec3be9373ca2be"} Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.342112 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae70464a-32e1-41cc-b173-85019c99d562","Type":"ContainerStarted","Data":"5afb20f95751e1c9d7291d73fa029970d72585120405dc11035276cf16221589"} Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.725092 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-z4wmv" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.751858 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-db-sync-config-data\") pod \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.751901 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pn2dz\" (UniqueName: \"kubernetes.io/projected/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-kube-api-access-pn2dz\") pod \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.751943 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-combined-ca-bundle\") pod \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.752036 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-config-data\") pod \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.752074 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-etc-machine-id\") pod \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.752096 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-scripts\") pod \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\" (UID: \"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc\") " Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.752647 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" (UID: "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.765011 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-kube-api-access-pn2dz" (OuterVolumeSpecName: "kube-api-access-pn2dz") pod "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" (UID: "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc"). InnerVolumeSpecName "kube-api-access-pn2dz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.766002 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" (UID: "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.767209 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-scripts" (OuterVolumeSpecName: "scripts") pod "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" (UID: "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.788184 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" (UID: "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.803240 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-config-data" (OuterVolumeSpecName: "config-data") pod "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" (UID: "74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.854627 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.854662 4662 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.854671 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.854679 4662 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.854687 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pn2dz\" (UniqueName: \"kubernetes.io/projected/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-kube-api-access-pn2dz\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:08 crc kubenswrapper[4662]: I1208 09:32:08.854697 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.353315 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae70464a-32e1-41cc-b173-85019c99d562","Type":"ContainerStarted","Data":"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a"} Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.354802 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae70464a-32e1-41cc-b173-85019c99d562","Type":"ContainerStarted","Data":"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb"} Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.357011 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-z4wmv" event={"ID":"74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc","Type":"ContainerDied","Data":"4756f582ca2664d59b65666b90aabd23f666802fcee1bcdb69a56eaf9c0f7e72"} Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.357132 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4756f582ca2664d59b65666b90aabd23f666802fcee1bcdb69a56eaf9c0f7e72" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.357196 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-z4wmv" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.364155 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-xsh55" event={"ID":"260b0772-e118-4134-a015-6e0d7180f166","Type":"ContainerStarted","Data":"a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948"} Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.364510 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.370569 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd78cf54-pjcw8" event={"ID":"2fa22efb-c102-4ef9-8176-ccaa4b7563c5","Type":"ContainerStarted","Data":"f26c40d7b850fc3379360256964fb99f0ff0b14e58189fc610511c56c6d61b09"} Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.370865 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.370944 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.392157 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-869f779d85-xsh55" podStartSLOduration=3.392131492 podStartE2EDuration="3.392131492s" podCreationTimestamp="2025-12-08 09:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:32:09.382653428 +0000 UTC m=+1052.951681418" watchObservedRunningTime="2025-12-08 09:32:09.392131492 +0000 UTC m=+1052.961159492" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.419178 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6fd78cf54-pjcw8" podStartSLOduration=3.419155986 podStartE2EDuration="3.419155986s" podCreationTimestamp="2025-12-08 09:32:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:32:09.406429195 +0000 UTC m=+1052.975457185" watchObservedRunningTime="2025-12-08 09:32:09.419155986 +0000 UTC m=+1052.988183976" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.596034 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-6c965fbb88-wxgll"] Dec 08 09:32:09 crc kubenswrapper[4662]: E1208 09:32:09.596395 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" containerName="cinder-db-sync" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.596412 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" containerName="cinder-db-sync" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.596603 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" containerName="cinder-db-sync" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.597452 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.604408 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.604528 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.616680 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c965fbb88-wxgll"] Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.669219 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-internal-tls-certs\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.669292 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a9d-af07-430b-985e-64a6077d6267-logs\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.669330 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-public-tls-certs\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.669354 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-config-data\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.669374 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs4v9\" (UniqueName: \"kubernetes.io/projected/0ad91a9d-af07-430b-985e-64a6077d6267-kube-api-access-qs4v9\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.669421 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-combined-ca-bundle\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.669444 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-config-data-custom\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.715679 4662 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/cinder-scheduler-0"] Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.717225 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.732961 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.733282 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.733364 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-7cjqq" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.733445 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.759635 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.771720 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a9d-af07-430b-985e-64a6077d6267-logs\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.771969 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.772052 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-scripts\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.772204 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.772325 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-public-tls-certs\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.772405 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-config-data\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.772481 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs4v9\" (UniqueName: 
\"kubernetes.io/projected/0ad91a9d-af07-430b-985e-64a6077d6267-kube-api-access-qs4v9\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.772583 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-combined-ca-bundle\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.772658 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-config-data-custom\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.772729 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-752xp\" (UniqueName: \"kubernetes.io/projected/79572439-2bff-4b4f-8208-f9ae14e64f9b-kube-api-access-752xp\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.772854 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79572439-2bff-4b4f-8208-f9ae14e64f9b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.772936 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.773014 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-internal-tls-certs\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.773151 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0ad91a9d-af07-430b-985e-64a6077d6267-logs\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.787495 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-internal-tls-certs\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.804424 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-config-data-custom\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.804956 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-public-tls-certs\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.812183 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-config-data\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.820102 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0ad91a9d-af07-430b-985e-64a6077d6267-combined-ca-bundle\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.842337 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs4v9\" (UniqueName: \"kubernetes.io/projected/0ad91a9d-af07-430b-985e-64a6077d6267-kube-api-access-qs4v9\") pod \"barbican-api-6c965fbb88-wxgll\" (UID: \"0ad91a9d-af07-430b-985e-64a6077d6267\") " pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.881188 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.881281 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.881303 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-scripts\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.881324 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.881404 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-752xp\" (UniqueName: \"kubernetes.io/projected/79572439-2bff-4b4f-8208-f9ae14e64f9b-kube-api-access-752xp\") pod \"cinder-scheduler-0\" (UID: 
\"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.881431 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79572439-2bff-4b4f-8208-f9ae14e64f9b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.881509 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79572439-2bff-4b4f-8208-f9ae14e64f9b-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.898778 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.906602 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-scripts\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.907162 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-xsh55"] Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.920485 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.934042 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.954924 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-752xp\" (UniqueName: \"kubernetes.io/projected/79572439-2bff-4b4f-8208-f9ae14e64f9b-kube-api-access-752xp\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.986731 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 08 09:32:09 crc kubenswrapper[4662]: I1208 09:32:09.989360 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.002813 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data\") pod \"cinder-scheduler-0\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.010121 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.057696 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.079609 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.087716 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.087828 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl46z\" (UniqueName: \"kubernetes.io/projected/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-kube-api-access-gl46z\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.087849 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.087865 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-scripts\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.087905 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.087938 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-logs\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.087975 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.110002 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-4zljc"] Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.111612 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.148153 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-4zljc"] Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.190704 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gl46z\" (UniqueName: \"kubernetes.io/projected/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-kube-api-access-gl46z\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.190782 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.190808 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-scripts\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.190852 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5bvz\" (UniqueName: \"kubernetes.io/projected/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-kube-api-access-d5bvz\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.190878 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.190911 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.190943 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-logs\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.190984 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.191003 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-dns-svc\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " 
pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.191021 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.191056 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-config\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.191086 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.194169 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-logs\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.195897 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.197842 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-scripts\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.226447 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.234444 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gl46z\" (UniqueName: \"kubernetes.io/projected/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-kube-api-access-gl46z\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.234525 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data-custom\") pod \"cinder-api-0\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.253154 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data\") pod \"cinder-api-0\" (UID: 
\"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " pod="openstack/cinder-api-0" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.292948 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-dns-svc\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.293009 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-config\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.293041 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.293075 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5bvz\" (UniqueName: \"kubernetes.io/projected/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-kube-api-access-d5bvz\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.293093 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.293966 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-sb\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.294563 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-config\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.294719 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-nb\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.295102 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-dns-svc\") pod \"dnsmasq-dns-58db5546cc-4zljc\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 
Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.384562 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Dec 08 09:32:10 crc kubenswrapper[4662]: I1208 09:32:10.458731 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-4zljc"
Dec 08 09:32:11 crc kubenswrapper[4662]: I1208 09:32:11.410718 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-869f779d85-xsh55" podUID="260b0772-e118-4134-a015-6e0d7180f166" containerName="dnsmasq-dns" containerID="cri-o://a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948" gracePeriod=10
Dec 08 09:32:11 crc kubenswrapper[4662]: I1208 09:32:11.617976 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-4zljc"]
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:11.998009 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.024662 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 08 09:32:12 crc kubenswrapper[4662]: W1208 09:32:12.042486 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79572439_2bff_4b4f_8208_f9ae14e64f9b.slice/crio-9a9757a7b4a7aa742a116bc16993fbcf3b301d9ad431df5dc1588012510c4c65 WatchSource:0}: Error finding container 9a9757a7b4a7aa742a116bc16993fbcf3b301d9ad431df5dc1588012510c4c65: Status 404 returned error can't find the container with id 9a9757a7b4a7aa742a116bc16993fbcf3b301d9ad431df5dc1588012510c4c65
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.063580 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-6c965fbb88-wxgll"]
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.183073 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-xsh55"
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.267533 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-nb\") pod \"260b0772-e118-4134-a015-6e0d7180f166\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") "
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.267601 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-sb\") pod \"260b0772-e118-4134-a015-6e0d7180f166\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") "
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.267623 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-dns-svc\") pod \"260b0772-e118-4134-a015-6e0d7180f166\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") "
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.267697 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vv9z\" (UniqueName: \"kubernetes.io/projected/260b0772-e118-4134-a015-6e0d7180f166-kube-api-access-6vv9z\") pod \"260b0772-e118-4134-a015-6e0d7180f166\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") "
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.267720 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-config\") pod \"260b0772-e118-4134-a015-6e0d7180f166\" (UID: \"260b0772-e118-4134-a015-6e0d7180f166\") "
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.302018 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/260b0772-e118-4134-a015-6e0d7180f166-kube-api-access-6vv9z" (OuterVolumeSpecName: "kube-api-access-6vv9z") pod "260b0772-e118-4134-a015-6e0d7180f166" (UID: "260b0772-e118-4134-a015-6e0d7180f166"). InnerVolumeSpecName "kube-api-access-6vv9z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.345396 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "260b0772-e118-4134-a015-6e0d7180f166" (UID: "260b0772-e118-4134-a015-6e0d7180f166"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.349367 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-config" (OuterVolumeSpecName: "config") pod "260b0772-e118-4134-a015-6e0d7180f166" (UID: "260b0772-e118-4134-a015-6e0d7180f166"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.370329 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vv9z\" (UniqueName: \"kubernetes.io/projected/260b0772-e118-4134-a015-6e0d7180f166-kube-api-access-6vv9z\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.370370 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.370382 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.411383 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "260b0772-e118-4134-a015-6e0d7180f166" (UID: "260b0772-e118-4134-a015-6e0d7180f166"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.420163 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "260b0772-e118-4134-a015-6e0d7180f166" (UID: "260b0772-e118-4134-a015-6e0d7180f166"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.448603 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" event={"ID":"0cdcfce4-297a-4fd9-8854-2b3bd51fc592","Type":"ContainerStarted","Data":"54cdeaec95409a090a3651b8dac02404906251738d3a412af0b9928c7d07cde5"} Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.448644 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" event={"ID":"0cdcfce4-297a-4fd9-8854-2b3bd51fc592","Type":"ContainerStarted","Data":"1ec905e6ffd7fe7cb4f5d4260baba9cfc1c4638b05af9df0dfb8be9aa6dd6719"} Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.460382 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a4d70cc0-8e18-42c6-842b-25ba8229f4e2","Type":"ContainerStarted","Data":"b9f6552a33ec02def9e7816806db6a44fdb04e0946f85aa0c04f1f597899827d"} Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.474097 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.474156 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/260b0772-e118-4134-a015-6e0d7180f166-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.480348 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-76455569b6-zfxpp" podStartSLOduration=3.122379028 podStartE2EDuration="6.480332869s" podCreationTimestamp="2025-12-08 09:32:06 +0000 UTC" firstStartedPulling="2025-12-08 
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.485213 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d594947cf-hhppl" event={"ID":"36f4c1e2-7f91-48fd-9258-d560df73bb4a","Type":"ContainerStarted","Data":"11c6557891a6b32799df8e7f0dfcae7330142f3fba355bffa647f6edce3623c8"}
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.485260 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-d594947cf-hhppl" event={"ID":"36f4c1e2-7f91-48fd-9258-d560df73bb4a","Type":"ContainerStarted","Data":"0461f5bec997bc63e019fcb32cd7a053c5cd63601f7ed17ff096efa6ef90de56"}
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.509045 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79572439-2bff-4b4f-8208-f9ae14e64f9b","Type":"ContainerStarted","Data":"9a9757a7b4a7aa742a116bc16993fbcf3b301d9ad431df5dc1588012510c4c65"}
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.526277 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae70464a-32e1-41cc-b173-85019c99d562","Type":"ContainerStarted","Data":"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965"}
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.532487 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-d594947cf-hhppl" podStartSLOduration=3.191423918 podStartE2EDuration="6.532471895s" podCreationTimestamp="2025-12-08 09:32:06 +0000 UTC" firstStartedPulling="2025-12-08 09:32:07.582765501 +0000 UTC m=+1051.151793491" lastFinishedPulling="2025-12-08 09:32:10.923813478 +0000 UTC m=+1054.492841468" observedRunningTime="2025-12-08 09:32:12.532398793 +0000 UTC m=+1056.101426783" watchObservedRunningTime="2025-12-08 09:32:12.532471895 +0000 UTC m=+1056.101499885"
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.536539 4662 generic.go:334] "Generic (PLEG): container finished" podID="63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" containerID="ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb" exitCode=0
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.537238 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" event={"ID":"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5","Type":"ContainerDied","Data":"ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb"}
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.537730 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" event={"ID":"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5","Type":"ContainerStarted","Data":"7e1ef37f2869a366597570bfbc1d7814598356d2984429c56817a741d3a4e3df"}
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.543291 4662 generic.go:334] "Generic (PLEG): container finished" podID="260b0772-e118-4134-a015-6e0d7180f166" containerID="a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948" exitCode=0
Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.543373 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-xsh55" event={"ID":"260b0772-e118-4134-a015-6e0d7180f166","Type":"ContainerDied","Data":"a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948"}
event={"ID":"260b0772-e118-4134-a015-6e0d7180f166","Type":"ContainerDied","Data":"a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948"} Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.543402 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-869f779d85-xsh55" event={"ID":"260b0772-e118-4134-a015-6e0d7180f166","Type":"ContainerDied","Data":"00a1c2782aea058382b2f174c3140071095d417bb44b89dc5741175aca070a75"} Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.543420 4662 scope.go:117] "RemoveContainer" containerID="a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.543514 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-869f779d85-xsh55" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.559718 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c965fbb88-wxgll" event={"ID":"0ad91a9d-af07-430b-985e-64a6077d6267","Type":"ContainerStarted","Data":"b9122dc0218f6ac2e3167349f6fd64938c5944293c0eaf93b04728db73d94505"} Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.630976 4662 scope.go:117] "RemoveContainer" containerID="3095023eacfcd3b17c0ac8079438e2459d045455bc85dff5e1c5453d015c16f8" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.759554 4662 scope.go:117] "RemoveContainer" containerID="a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.774753 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-xsh55"] Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.776268 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-869f779d85-xsh55"] Dec 08 09:32:12 crc kubenswrapper[4662]: E1208 09:32:12.776443 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948\": container with ID starting with a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948 not found: ID does not exist" containerID="a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.777095 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948"} err="failed to get container status \"a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948\": rpc error: code = NotFound desc = could not find container \"a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948\": container with ID starting with a5bb3e861da2864dcb3447f5856541206637b7464daf34863e2713ad7e459948 not found: ID does not exist" Dec 08 09:32:12 crc kubenswrapper[4662]: I1208 09:32:12.777187 4662 scope.go:117] "RemoveContainer" containerID="3095023eacfcd3b17c0ac8079438e2459d045455bc85dff5e1c5453d015c16f8" Dec 08 09:32:12 crc kubenswrapper[4662]: E1208 09:32:12.780589 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3095023eacfcd3b17c0ac8079438e2459d045455bc85dff5e1c5453d015c16f8\": container with ID starting with 3095023eacfcd3b17c0ac8079438e2459d045455bc85dff5e1c5453d015c16f8 not found: ID does not exist" containerID="3095023eacfcd3b17c0ac8079438e2459d045455bc85dff5e1c5453d015c16f8" Dec 08 09:32:12 crc 
Dec 08 09:32:13 crc kubenswrapper[4662]: I1208 09:32:13.334854 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Dec 08 09:32:13 crc kubenswrapper[4662]: I1208 09:32:13.626142 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c965fbb88-wxgll" event={"ID":"0ad91a9d-af07-430b-985e-64a6077d6267","Type":"ContainerStarted","Data":"b3ad44577c32fc280bf17b5f50d7a8ce605f61f5c1d2ecfa80ebed081376e34f"}
Dec 08 09:32:13 crc kubenswrapper[4662]: I1208 09:32:13.628030 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a4d70cc0-8e18-42c6-842b-25ba8229f4e2","Type":"ContainerStarted","Data":"bcba7c1746701752c693dc17f64d5b7c10fa7c5d15894792b2143134bad911dc"}
Dec 08 09:32:13 crc kubenswrapper[4662]: I1208 09:32:13.656246 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" event={"ID":"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5","Type":"ContainerStarted","Data":"c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996"}
Dec 08 09:32:13 crc kubenswrapper[4662]: I1208 09:32:13.656846 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-58db5546cc-4zljc"
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.688707 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79572439-2bff-4b4f-8208-f9ae14e64f9b","Type":"ContainerStarted","Data":"835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680"}
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.725570 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="260b0772-e118-4134-a015-6e0d7180f166" path="/var/lib/kubelet/pods/260b0772-e118-4134-a015-6e0d7180f166/volumes"
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.726681 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae70464a-32e1-41cc-b173-85019c99d562","Type":"ContainerStarted","Data":"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2"}
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.726726 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.739812 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6c965fbb88-wxgll" event={"ID":"0ad91a9d-af07-430b-985e-64a6077d6267","Type":"ContainerStarted","Data":"708ae70374bb827d4ad7edc413f4335c4b837b0d18197cb945afd05746587a72"}
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.740702 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c965fbb88-wxgll"
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.740762 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-6c965fbb88-wxgll"
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.752039 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api-log" containerID="cri-o://bcba7c1746701752c693dc17f64d5b7c10fa7c5d15894792b2143134bad911dc" gracePeriod=30
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.755507 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api" containerID="cri-o://de6bdd347068e8a7c063bb7a8d072308fc12f21e8ae059d4425c30046b69414c" gracePeriod=30
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.758114 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a4d70cc0-8e18-42c6-842b-25ba8229f4e2","Type":"ContainerStarted","Data":"de6bdd347068e8a7c063bb7a8d072308fc12f21e8ae059d4425c30046b69414c"}
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.758191 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.783813 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" podStartSLOduration=5.783788832 podStartE2EDuration="5.783788832s" podCreationTimestamp="2025-12-08 09:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:32:13.694637016 +0000 UTC m=+1057.263665006" watchObservedRunningTime="2025-12-08 09:32:14.783788832 +0000 UTC m=+1058.352816822"
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.790258 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.790239145 podStartE2EDuration="5.790239145s" podCreationTimestamp="2025-12-08 09:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:32:14.781178292 +0000 UTC m=+1058.350206282" watchObservedRunningTime="2025-12-08 09:32:14.790239145 +0000 UTC m=+1058.359267135"
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.800044 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.858389959 podStartE2EDuration="8.800025687s" podCreationTimestamp="2025-12-08 09:32:06 +0000 UTC" firstStartedPulling="2025-12-08 09:32:07.40836656 +0000 UTC m=+1050.977394550" lastFinishedPulling="2025-12-08 09:32:13.350002288 +0000 UTC m=+1056.919030278" observedRunningTime="2025-12-08 09:32:14.76202414 +0000 UTC m=+1058.331052130" watchObservedRunningTime="2025-12-08 09:32:14.800025687 +0000 UTC m=+1058.369053667"
Dec 08 09:32:14 crc kubenswrapper[4662]: I1208 09:32:14.826547 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-6c965fbb88-wxgll" podStartSLOduration=5.8265215569999995 podStartE2EDuration="5.826521557s" podCreationTimestamp="2025-12-08 09:32:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:32:14.81730877 +0000 UTC m=+1058.386336760" watchObservedRunningTime="2025-12-08 09:32:14.826521557 +0000 UTC m=+1058.395549547"
Dec 08 09:32:15 crc kubenswrapper[4662]: I1208 09:32:15.781690 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79572439-2bff-4b4f-8208-f9ae14e64f9b","Type":"ContainerStarted","Data":"6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3"}
pod="openstack/cinder-scheduler-0" event={"ID":"79572439-2bff-4b4f-8208-f9ae14e64f9b","Type":"ContainerStarted","Data":"6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3"} Dec 08 09:32:15 crc kubenswrapper[4662]: I1208 09:32:15.783445 4662 generic.go:334] "Generic (PLEG): container finished" podID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerID="bcba7c1746701752c693dc17f64d5b7c10fa7c5d15894792b2143134bad911dc" exitCode=143 Dec 08 09:32:15 crc kubenswrapper[4662]: I1208 09:32:15.784460 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a4d70cc0-8e18-42c6-842b-25ba8229f4e2","Type":"ContainerDied","Data":"bcba7c1746701752c693dc17f64d5b7c10fa7c5d15894792b2143134bad911dc"} Dec 08 09:32:15 crc kubenswrapper[4662]: I1208 09:32:15.864434 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.59887305 podStartE2EDuration="6.864416399s" podCreationTimestamp="2025-12-08 09:32:09 +0000 UTC" firstStartedPulling="2025-12-08 09:32:12.078791057 +0000 UTC m=+1055.647819047" lastFinishedPulling="2025-12-08 09:32:13.344334406 +0000 UTC m=+1056.913362396" observedRunningTime="2025-12-08 09:32:15.855158001 +0000 UTC m=+1059.424185991" watchObservedRunningTime="2025-12-08 09:32:15.864416399 +0000 UTC m=+1059.433444389" Dec 08 09:32:18 crc kubenswrapper[4662]: I1208 09:32:18.695139 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:18 crc kubenswrapper[4662]: I1208 09:32:18.800336 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:20 crc kubenswrapper[4662]: I1208 09:32:20.058777 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 08 09:32:20 crc kubenswrapper[4662]: I1208 09:32:20.431306 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 08 09:32:20 crc kubenswrapper[4662]: I1208 09:32:20.462974 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:32:20 crc kubenswrapper[4662]: I1208 09:32:20.525698 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rx4kl"] Dec 08 09:32:20 crc kubenswrapper[4662]: I1208 09:32:20.525966 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" podUID="68ba3ed7-3419-4918-bf9e-1ed15b0a2596" containerName="dnsmasq-dns" containerID="cri-o://0a1fbe496b26ebc9ed59ac49427a231b67ca9f6f9061b12a5c44656f7512e26a" gracePeriod=10 Dec 08 09:32:20 crc kubenswrapper[4662]: I1208 09:32:20.840885 4662 generic.go:334] "Generic (PLEG): container finished" podID="68ba3ed7-3419-4918-bf9e-1ed15b0a2596" containerID="0a1fbe496b26ebc9ed59ac49427a231b67ca9f6f9061b12a5c44656f7512e26a" exitCode=0 Dec 08 09:32:20 crc kubenswrapper[4662]: I1208 09:32:20.841825 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" event={"ID":"68ba3ed7-3419-4918-bf9e-1ed15b0a2596","Type":"ContainerDied","Data":"0a1fbe496b26ebc9ed59ac49427a231b67ca9f6f9061b12a5c44656f7512e26a"} Dec 08 09:32:20 crc kubenswrapper[4662]: I1208 09:32:20.908657 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.278398 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-dns-svc\") pod \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") "
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.278688 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-47w5r\" (UniqueName: \"kubernetes.io/projected/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-kube-api-access-47w5r\") pod \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") "
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.278800 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-nb\") pod \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") "
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.278908 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-config\") pod \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") "
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.278927 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-sb\") pod \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\" (UID: \"68ba3ed7-3419-4918-bf9e-1ed15b0a2596\") "
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.300771 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-kube-api-access-47w5r" (OuterVolumeSpecName: "kube-api-access-47w5r") pod "68ba3ed7-3419-4918-bf9e-1ed15b0a2596" (UID: "68ba3ed7-3419-4918-bf9e-1ed15b0a2596"). InnerVolumeSpecName "kube-api-access-47w5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.380825 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-47w5r\" (UniqueName: \"kubernetes.io/projected/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-kube-api-access-47w5r\") on node \"crc\" DevicePath \"\""
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.381550 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "68ba3ed7-3419-4918-bf9e-1ed15b0a2596" (UID: "68ba3ed7-3419-4918-bf9e-1ed15b0a2596"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.402215 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "68ba3ed7-3419-4918-bf9e-1ed15b0a2596" (UID: "68ba3ed7-3419-4918-bf9e-1ed15b0a2596"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.427209 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "68ba3ed7-3419-4918-bf9e-1ed15b0a2596" (UID: "68ba3ed7-3419-4918-bf9e-1ed15b0a2596"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.436485 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-config" (OuterVolumeSpecName: "config") pod "68ba3ed7-3419-4918-bf9e-1ed15b0a2596" (UID: "68ba3ed7-3419-4918-bf9e-1ed15b0a2596"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.482483 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.482517 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.482527 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.482535 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/68ba3ed7-3419-4918-bf9e-1ed15b0a2596-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.824842 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.857760 4662 util.go:48] "No ready sandbox for pod can be found. 
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.857811 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f66db59b9-rx4kl" event={"ID":"68ba3ed7-3419-4918-bf9e-1ed15b0a2596","Type":"ContainerDied","Data":"68c7920cbe137a735ca1ed0c845f0b88565468761e3eef669be353bbd8fcd3a6"}
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.857844 4662 scope.go:117] "RemoveContainer" containerID="0a1fbe496b26ebc9ed59ac49427a231b67ca9f6f9061b12a5c44656f7512e26a"
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.857873 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="79572439-2bff-4b4f-8208-f9ae14e64f9b" containerName="cinder-scheduler" containerID="cri-o://835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680" gracePeriod=30
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.858498 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="79572439-2bff-4b4f-8208-f9ae14e64f9b" containerName="probe" containerID="cri-o://6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3" gracePeriod=30
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.940398 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rx4kl"]
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.944324 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f66db59b9-rx4kl"]
Dec 08 09:32:21 crc kubenswrapper[4662]: I1208 09:32:21.954647 4662 scope.go:117] "RemoveContainer" containerID="9eb8845ba776f6e57b60bace9f527d14a610cd4ac59ff086e22879520bdfe72e"
Dec 08 09:32:22 crc kubenswrapper[4662]: I1208 09:32:22.716336 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="68ba3ed7-3419-4918-bf9e-1ed15b0a2596" path="/var/lib/kubelet/pods/68ba3ed7-3419-4918-bf9e-1ed15b0a2596/volumes"
Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.634306 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.749940 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79572439-2bff-4b4f-8208-f9ae14e64f9b-etc-machine-id\") pod \"79572439-2bff-4b4f-8208-f9ae14e64f9b\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") "
Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.750036 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-752xp\" (UniqueName: \"kubernetes.io/projected/79572439-2bff-4b4f-8208-f9ae14e64f9b-kube-api-access-752xp\") pod \"79572439-2bff-4b4f-8208-f9ae14e64f9b\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") "
Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.750069 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data-custom\") pod \"79572439-2bff-4b4f-8208-f9ae14e64f9b\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") "
Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.750108 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data\") pod \"79572439-2bff-4b4f-8208-f9ae14e64f9b\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") "
Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.750141 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-combined-ca-bundle\") pod \"79572439-2bff-4b4f-8208-f9ae14e64f9b\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") "
Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.750184 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-scripts\") pod \"79572439-2bff-4b4f-8208-f9ae14e64f9b\" (UID: \"79572439-2bff-4b4f-8208-f9ae14e64f9b\") "
Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.752897 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/79572439-2bff-4b4f-8208-f9ae14e64f9b-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "79572439-2bff-4b4f-8208-f9ae14e64f9b" (UID: "79572439-2bff-4b4f-8208-f9ae14e64f9b"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.762397 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79572439-2bff-4b4f-8208-f9ae14e64f9b-kube-api-access-752xp" (OuterVolumeSpecName: "kube-api-access-752xp") pod "79572439-2bff-4b4f-8208-f9ae14e64f9b" (UID: "79572439-2bff-4b4f-8208-f9ae14e64f9b"). InnerVolumeSpecName "kube-api-access-752xp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.765756 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-scripts" (OuterVolumeSpecName: "scripts") pod "79572439-2bff-4b4f-8208-f9ae14e64f9b" (UID: "79572439-2bff-4b4f-8208-f9ae14e64f9b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.768719 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "79572439-2bff-4b4f-8208-f9ae14e64f9b" (UID: "79572439-2bff-4b4f-8208-f9ae14e64f9b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.829096 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c965fbb88-wxgll" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.852025 4662 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/79572439-2bff-4b4f-8208-f9ae14e64f9b-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.852065 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-752xp\" (UniqueName: \"kubernetes.io/projected/79572439-2bff-4b4f-8208-f9ae14e64f9b-kube-api-access-752xp\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.852078 4662 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.852091 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.873335 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79572439-2bff-4b4f-8208-f9ae14e64f9b" (UID: "79572439-2bff-4b4f-8208-f9ae14e64f9b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.900963 4662 generic.go:334] "Generic (PLEG): container finished" podID="79572439-2bff-4b4f-8208-f9ae14e64f9b" containerID="6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3" exitCode=0 Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.900994 4662 generic.go:334] "Generic (PLEG): container finished" podID="79572439-2bff-4b4f-8208-f9ae14e64f9b" containerID="835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680" exitCode=0 Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.901013 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79572439-2bff-4b4f-8208-f9ae14e64f9b","Type":"ContainerDied","Data":"6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3"} Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.901040 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79572439-2bff-4b4f-8208-f9ae14e64f9b","Type":"ContainerDied","Data":"835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680"} Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.901052 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"79572439-2bff-4b4f-8208-f9ae14e64f9b","Type":"ContainerDied","Data":"9a9757a7b4a7aa742a116bc16993fbcf3b301d9ad431df5dc1588012510c4c65"} Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.901066 4662 scope.go:117] "RemoveContainer" containerID="6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.901174 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.936667 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data" (OuterVolumeSpecName: "config-data") pod "79572439-2bff-4b4f-8208-f9ae14e64f9b" (UID: "79572439-2bff-4b4f-8208-f9ae14e64f9b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.954872 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:23 crc kubenswrapper[4662]: I1208 09:32:23.954900 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79572439-2bff-4b4f-8208-f9ae14e64f9b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.012902 4662 scope.go:117] "RemoveContainer" containerID="835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.046949 4662 scope.go:117] "RemoveContainer" containerID="6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3" Dec 08 09:32:24 crc kubenswrapper[4662]: E1208 09:32:24.050910 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3\": container with ID starting with 6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3 not found: ID does not exist" containerID="6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.050960 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3"} err="failed to get container status \"6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3\": rpc error: code = NotFound desc = could not find container \"6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3\": container with ID starting with 6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3 not found: ID does not exist" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.050989 4662 scope.go:117] "RemoveContainer" containerID="835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680" Dec 08 09:32:24 crc kubenswrapper[4662]: E1208 09:32:24.052756 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680\": container with ID starting with 835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680 not found: ID does not exist" containerID="835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.052794 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680"} err="failed to get container status \"835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680\": rpc error: code = NotFound desc = could not find container \"835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680\": container with ID starting with 835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680 not found: ID does not exist" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.052823 4662 scope.go:117] "RemoveContainer" containerID="6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.056927 4662 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3"} err="failed to get container status \"6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3\": rpc error: code = NotFound desc = could not find container \"6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3\": container with ID starting with 6b03bc28d29880f78c652547b071f58d94adb26225e1821eff8640510e3c96e3 not found: ID does not exist" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.056982 4662 scope.go:117] "RemoveContainer" containerID="835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.060914 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680"} err="failed to get container status \"835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680\": rpc error: code = NotFound desc = could not find container \"835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680\": container with ID starting with 835b1407b2c7ffc2097d446d4fba5b801cc29f306f5a5087c2353323be7bd680 not found: ID does not exist" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.234871 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.245899 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.279660 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Dec 08 09:32:24 crc kubenswrapper[4662]: E1208 09:32:24.280070 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260b0772-e118-4134-a015-6e0d7180f166" containerName="dnsmasq-dns" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.280087 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="260b0772-e118-4134-a015-6e0d7180f166" containerName="dnsmasq-dns" Dec 08 09:32:24 crc kubenswrapper[4662]: E1208 09:32:24.280103 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="260b0772-e118-4134-a015-6e0d7180f166" containerName="init" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.280110 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="260b0772-e118-4134-a015-6e0d7180f166" containerName="init" Dec 08 09:32:24 crc kubenswrapper[4662]: E1208 09:32:24.280123 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79572439-2bff-4b4f-8208-f9ae14e64f9b" containerName="probe" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.280128 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="79572439-2bff-4b4f-8208-f9ae14e64f9b" containerName="probe" Dec 08 09:32:24 crc kubenswrapper[4662]: E1208 09:32:24.280139 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79572439-2bff-4b4f-8208-f9ae14e64f9b" containerName="cinder-scheduler" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.280147 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="79572439-2bff-4b4f-8208-f9ae14e64f9b" containerName="cinder-scheduler" Dec 08 09:32:24 crc kubenswrapper[4662]: E1208 09:32:24.280157 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ba3ed7-3419-4918-bf9e-1ed15b0a2596" containerName="init" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.280164 4662 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="68ba3ed7-3419-4918-bf9e-1ed15b0a2596" containerName="init" Dec 08 09:32:24 crc kubenswrapper[4662]: E1208 09:32:24.280177 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="68ba3ed7-3419-4918-bf9e-1ed15b0a2596" containerName="dnsmasq-dns" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.280186 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="68ba3ed7-3419-4918-bf9e-1ed15b0a2596" containerName="dnsmasq-dns" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.280376 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="68ba3ed7-3419-4918-bf9e-1ed15b0a2596" containerName="dnsmasq-dns" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.280391 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="79572439-2bff-4b4f-8208-f9ae14e64f9b" containerName="cinder-scheduler" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.280400 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="79572439-2bff-4b4f-8208-f9ae14e64f9b" containerName="probe" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.280415 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="260b0772-e118-4134-a015-6e0d7180f166" containerName="dnsmasq-dns" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.281481 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.286703 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.296320 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.362067 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.362135 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.362203 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.362223 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f43552a-574c-4fb3-811d-e264f0cec162-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.362251 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.362271 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2stww\" (UniqueName: \"kubernetes.io/projected/3f43552a-574c-4fb3-811d-e264f0cec162-kube-api-access-2stww\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.464659 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.464787 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.464811 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f43552a-574c-4fb3-811d-e264f0cec162-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.464831 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.464852 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2stww\" (UniqueName: \"kubernetes.io/projected/3f43552a-574c-4fb3-811d-e264f0cec162-kube-api-access-2stww\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.464919 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.464914 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3f43552a-574c-4fb3-811d-e264f0cec162-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.473456 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-config-data\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0" Dec 08 09:32:24 crc 
Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.489563 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-scripts\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0"
Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.491028 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3f43552a-574c-4fb3-811d-e264f0cec162-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0"
Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.494359 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2stww\" (UniqueName: \"kubernetes.io/projected/3f43552a-574c-4fb3-811d-e264f0cec162-kube-api-access-2stww\") pod \"cinder-scheduler-0\" (UID: \"3f43552a-574c-4fb3-811d-e264f0cec162\") " pod="openstack/cinder-scheduler-0"
Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.597489 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.722833 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79572439-2bff-4b4f-8208-f9ae14e64f9b" path="/var/lib/kubelet/pods/79572439-2bff-4b4f-8208-f9ae14e64f9b/volumes"
Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.817936 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-6c965fbb88-wxgll"
Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.938597 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6fd78cf54-pjcw8"]
Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.944484 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6fd78cf54-pjcw8" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerName="barbican-api-log" containerID="cri-o://2ef96a4ea6409f3f1df47354f6ea661afdad58a59076951524ac5784e4147712" gracePeriod=30
Dec 08 09:32:24 crc kubenswrapper[4662]: I1208 09:32:24.944856 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-6fd78cf54-pjcw8" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerName="barbican-api" containerID="cri-o://f26c40d7b850fc3379360256964fb99f0ff0b14e58189fc610511c56c6d61b09" gracePeriod=30
Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.131881 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6775bc75d4-c5zmq"
Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.187077 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.258903 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6775bc75d4-c5zmq"
Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.429231 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.153:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
probeType="Readiness" pod="openstack/cinder-api-0" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.153:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.430593 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5d95b89f45-hlj4c" Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.515462 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c65c7b9c6-qttgk"] Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.516194 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c65c7b9c6-qttgk" podUID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" containerName="neutron-api" containerID="cri-o://815602ddb771eb81949d3080aadfb9816e9f10a154a126dafcde60f9edd25a33" gracePeriod=30 Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.516314 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c65c7b9c6-qttgk" podUID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" containerName="neutron-httpd" containerID="cri-o://02eca89352fc8db4427492de988af39bd978d91dfb33f2bd8c676068678e2807" gracePeriod=30 Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.516028 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6c86ffd5b9-ffgx7" Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.956164 4662 generic.go:334] "Generic (PLEG): container finished" podID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerID="2ef96a4ea6409f3f1df47354f6ea661afdad58a59076951524ac5784e4147712" exitCode=143 Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.957241 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd78cf54-pjcw8" event={"ID":"2fa22efb-c102-4ef9-8176-ccaa4b7563c5","Type":"ContainerDied","Data":"2ef96a4ea6409f3f1df47354f6ea661afdad58a59076951524ac5784e4147712"} Dec 08 09:32:25 crc kubenswrapper[4662]: I1208 09:32:25.966276 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f43552a-574c-4fb3-811d-e264f0cec162","Type":"ContainerStarted","Data":"61f9b098cdd5e8f2969479df04fc8140711a948ef5c1576fefc9a3f27d3f9f6b"} Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.663010 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.667011 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.669852 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.670005 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.670120 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-j8cx2" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.680790 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.734093 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nwcvn\" (UniqueName: \"kubernetes.io/projected/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-kube-api-access-nwcvn\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.734210 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-combined-ca-bundle\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.734331 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-openstack-config-secret\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.734391 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-openstack-config\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.837798 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-combined-ca-bundle\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.837928 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-openstack-config-secret\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.837989 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-openstack-config\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.838060 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nwcvn\" (UniqueName: \"kubernetes.io/projected/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-kube-api-access-nwcvn\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.843440 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-combined-ca-bundle\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.843703 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-openstack-config\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.849035 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-openstack-config-secret\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.872259 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nwcvn\" (UniqueName: \"kubernetes.io/projected/be8af0bd-5d4a-4a83-84f5-5687dfeaab59-kube-api-access-nwcvn\") pod \"openstackclient\" (UID: \"be8af0bd-5d4a-4a83-84f5-5687dfeaab59\") " pod="openstack/openstackclient" Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.981083 4662 generic.go:334] "Generic (PLEG): container finished" podID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" containerID="02eca89352fc8db4427492de988af39bd978d91dfb33f2bd8c676068678e2807" exitCode=0 Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.981195 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c65c7b9c6-qttgk" event={"ID":"7958f2ae-4e08-4879-8235-8ae34a4ea86b","Type":"ContainerDied","Data":"02eca89352fc8db4427492de988af39bd978d91dfb33f2bd8c676068678e2807"} Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.986905 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f43552a-574c-4fb3-811d-e264f0cec162","Type":"ContainerStarted","Data":"296d2f1b9ee5dc8f0fcf80403de8cdb8a6149c307ae0e5041f388ade3732748c"} Dec 08 09:32:26 crc kubenswrapper[4662]: I1208 09:32:26.987168 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Dec 08 09:32:27 crc kubenswrapper[4662]: I1208 09:32:27.445257 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Dec 08 09:32:27 crc kubenswrapper[4662]: I1208 09:32:27.997301 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"3f43552a-574c-4fb3-811d-e264f0cec162","Type":"ContainerStarted","Data":"c1475c69b494b15b58cd4b29be8893ae4ee54cade4b609173a81fc59a3c1e27c"} Dec 08 09:32:27 crc kubenswrapper[4662]: I1208 09:32:27.998731 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"be8af0bd-5d4a-4a83-84f5-5687dfeaab59","Type":"ContainerStarted","Data":"7924dcbaf9828a9921fcf4946b9d2bd15ad660ebb64f7eec7dfb7ab02ce2b6fd"} Dec 08 09:32:28 crc kubenswrapper[4662]: I1208 09:32:28.025684 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=4.025669428 podStartE2EDuration="4.025669428s" podCreationTimestamp="2025-12-08 09:32:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:32:28.021170608 +0000 UTC m=+1071.590198598" watchObservedRunningTime="2025-12-08 09:32:28.025669428 +0000 UTC m=+1071.594697408" Dec 08 09:32:28 crc kubenswrapper[4662]: I1208 09:32:28.555951 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fd78cf54-pjcw8" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.150:9311/healthcheck\": read tcp 10.217.0.2:44974->10.217.0.150:9311: read: connection reset by peer" Dec 08 09:32:28 crc kubenswrapper[4662]: I1208 09:32:28.556070 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-6fd78cf54-pjcw8" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.150:9311/healthcheck\": read tcp 10.217.0.2:44982->10.217.0.150:9311: read: connection reset by peer" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.027879 4662 generic.go:334] "Generic (PLEG): container finished" podID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerID="f26c40d7b850fc3379360256964fb99f0ff0b14e58189fc610511c56c6d61b09" exitCode=0 Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.028930 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd78cf54-pjcw8" event={"ID":"2fa22efb-c102-4ef9-8176-ccaa4b7563c5","Type":"ContainerDied","Data":"f26c40d7b850fc3379360256964fb99f0ff0b14e58189fc610511c56c6d61b09"} Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.028960 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-6fd78cf54-pjcw8" event={"ID":"2fa22efb-c102-4ef9-8176-ccaa4b7563c5","Type":"ContainerDied","Data":"0dc0cf85df391988ed25141b43eb2f24eebff2ac0f4a6308468d7866861526fa"} Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.028975 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dc0cf85df391988ed25141b43eb2f24eebff2ac0f4a6308468d7866861526fa" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.036943 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.085989 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data-custom\") pod \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.086032 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-logs\") pod \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.086097 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-combined-ca-bundle\") pod \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.086130 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttmc4\" (UniqueName: \"kubernetes.io/projected/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-kube-api-access-ttmc4\") pod \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.086203 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data\") pod \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\" (UID: \"2fa22efb-c102-4ef9-8176-ccaa4b7563c5\") " Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.087877 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-logs" (OuterVolumeSpecName: "logs") pod "2fa22efb-c102-4ef9-8176-ccaa4b7563c5" (UID: "2fa22efb-c102-4ef9-8176-ccaa4b7563c5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.100114 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "2fa22efb-c102-4ef9-8176-ccaa4b7563c5" (UID: "2fa22efb-c102-4ef9-8176-ccaa4b7563c5"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.100608 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-kube-api-access-ttmc4" (OuterVolumeSpecName: "kube-api-access-ttmc4") pod "2fa22efb-c102-4ef9-8176-ccaa4b7563c5" (UID: "2fa22efb-c102-4ef9-8176-ccaa4b7563c5"). InnerVolumeSpecName "kube-api-access-ttmc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.181348 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2fa22efb-c102-4ef9-8176-ccaa4b7563c5" (UID: "2fa22efb-c102-4ef9-8176-ccaa4b7563c5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.187704 4662 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.187728 4662 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-logs\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.187754 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.187763 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ttmc4\" (UniqueName: \"kubernetes.io/projected/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-kube-api-access-ttmc4\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.216887 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data" (OuterVolumeSpecName: "config-data") pod "2fa22efb-c102-4ef9-8176-ccaa4b7563c5" (UID: "2fa22efb-c102-4ef9-8176-ccaa4b7563c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.289764 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2fa22efb-c102-4ef9-8176-ccaa4b7563c5-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.424256 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 08 09:32:29 crc kubenswrapper[4662]: I1208 09:32:29.598542 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Dec 08 09:32:30 crc kubenswrapper[4662]: I1208 09:32:30.043805 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-6fd78cf54-pjcw8" Dec 08 09:32:30 crc kubenswrapper[4662]: I1208 09:32:30.098619 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-6fd78cf54-pjcw8"] Dec 08 09:32:30 crc kubenswrapper[4662]: I1208 09:32:30.105083 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-6fd78cf54-pjcw8"] Dec 08 09:32:30 crc kubenswrapper[4662]: I1208 09:32:30.720996 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" path="/var/lib/kubelet/pods/2fa22efb-c102-4ef9-8176-ccaa4b7563c5/volumes" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.079370 4662 generic.go:334] "Generic (PLEG): container finished" podID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" containerID="815602ddb771eb81949d3080aadfb9816e9f10a154a126dafcde60f9edd25a33" exitCode=0 Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.079592 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c65c7b9c6-qttgk" event={"ID":"7958f2ae-4e08-4879-8235-8ae34a4ea86b","Type":"ContainerDied","Data":"815602ddb771eb81949d3080aadfb9816e9f10a154a126dafcde60f9edd25a33"} Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.337235 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.442255 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-ovndb-tls-certs\") pod \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.442311 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-httpd-config\") pod \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.442396 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w45fx\" (UniqueName: \"kubernetes.io/projected/7958f2ae-4e08-4879-8235-8ae34a4ea86b-kube-api-access-w45fx\") pod \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.442443 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-config\") pod \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.442474 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-combined-ca-bundle\") pod \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\" (UID: \"7958f2ae-4e08-4879-8235-8ae34a4ea86b\") " Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.466814 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "7958f2ae-4e08-4879-8235-8ae34a4ea86b" (UID: "7958f2ae-4e08-4879-8235-8ae34a4ea86b"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.487971 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7958f2ae-4e08-4879-8235-8ae34a4ea86b-kube-api-access-w45fx" (OuterVolumeSpecName: "kube-api-access-w45fx") pod "7958f2ae-4e08-4879-8235-8ae34a4ea86b" (UID: "7958f2ae-4e08-4879-8235-8ae34a4ea86b"). InnerVolumeSpecName "kube-api-access-w45fx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.548444 4662 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-httpd-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.548479 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w45fx\" (UniqueName: \"kubernetes.io/projected/7958f2ae-4e08-4879-8235-8ae34a4ea86b-kube-api-access-w45fx\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.552443 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-config" (OuterVolumeSpecName: "config") pod "7958f2ae-4e08-4879-8235-8ae34a4ea86b" (UID: "7958f2ae-4e08-4879-8235-8ae34a4ea86b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.572688 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "7958f2ae-4e08-4879-8235-8ae34a4ea86b" (UID: "7958f2ae-4e08-4879-8235-8ae34a4ea86b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.575848 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7958f2ae-4e08-4879-8235-8ae34a4ea86b" (UID: "7958f2ae-4e08-4879-8235-8ae34a4ea86b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.611539 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.611823 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.649829 4662 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.649853 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:32 crc kubenswrapper[4662]: I1208 09:32:32.649863 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7958f2ae-4e08-4879-8235-8ae34a4ea86b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:33 crc kubenswrapper[4662]: I1208 09:32:33.088547 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c65c7b9c6-qttgk" event={"ID":"7958f2ae-4e08-4879-8235-8ae34a4ea86b","Type":"ContainerDied","Data":"0c418e0cc5b323054b3696b9f20f04ea26d80fe94725567b898380887726aae5"} Dec 08 09:32:33 crc kubenswrapper[4662]: I1208 09:32:33.088596 4662 scope.go:117] "RemoveContainer" containerID="02eca89352fc8db4427492de988af39bd978d91dfb33f2bd8c676068678e2807" Dec 08 09:32:33 crc kubenswrapper[4662]: I1208 09:32:33.088605 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c65c7b9c6-qttgk" Dec 08 09:32:33 crc kubenswrapper[4662]: I1208 09:32:33.114015 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c65c7b9c6-qttgk"] Dec 08 09:32:33 crc kubenswrapper[4662]: I1208 09:32:33.126825 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c65c7b9c6-qttgk"] Dec 08 09:32:33 crc kubenswrapper[4662]: I1208 09:32:33.135819 4662 scope.go:117] "RemoveContainer" containerID="815602ddb771eb81949d3080aadfb9816e9f10a154a126dafcde60f9edd25a33" Dec 08 09:32:34 crc kubenswrapper[4662]: I1208 09:32:34.709576 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" path="/var/lib/kubelet/pods/7958f2ae-4e08-4879-8235-8ae34a4ea86b/volumes" Dec 08 09:32:34 crc kubenswrapper[4662]: I1208 09:32:34.838614 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Dec 08 09:32:36 crc kubenswrapper[4662]: I1208 09:32:36.784553 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.682429 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-bmvjs"] Dec 08 09:32:38 crc kubenswrapper[4662]: E1208 09:32:38.685984 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerName="barbican-api" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.687486 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerName="barbican-api" Dec 08 09:32:38 crc kubenswrapper[4662]: E1208 09:32:38.687581 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" containerName="neutron-httpd" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.687634 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" containerName="neutron-httpd" Dec 08 09:32:38 crc kubenswrapper[4662]: E1208 09:32:38.687708 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" containerName="neutron-api" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.687793 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" containerName="neutron-api" Dec 08 09:32:38 crc kubenswrapper[4662]: E1208 09:32:38.687849 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerName="barbican-api-log" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.687907 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerName="barbican-api-log" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.688127 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerName="barbican-api-log" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.688214 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" containerName="neutron-api" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.688295 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fa22efb-c102-4ef9-8176-ccaa4b7563c5" containerName="barbican-api" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.688353 4662 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7958f2ae-4e08-4879-8235-8ae34a4ea86b" containerName="neutron-httpd" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.689012 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bmvjs" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.696487 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bmvjs"] Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.777275 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-d829v"] Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.781975 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8409e9a3-56e3-49a4-b270-ee8a2493fa75-operator-scripts\") pod \"nova-api-db-create-bmvjs\" (UID: \"8409e9a3-56e3-49a4-b270-ee8a2493fa75\") " pod="openstack/nova-api-db-create-bmvjs" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.782041 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmvj\" (UniqueName: \"kubernetes.io/projected/8409e9a3-56e3-49a4-b270-ee8a2493fa75-kube-api-access-pxmvj\") pod \"nova-api-db-create-bmvjs\" (UID: \"8409e9a3-56e3-49a4-b270-ee8a2493fa75\") " pod="openstack/nova-api-db-create-bmvjs" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.791255 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d829v"] Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.791358 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d829v" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.887501 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4aa2f3-f0db-4855-b972-e077877518c6-operator-scripts\") pod \"nova-cell0-db-create-d829v\" (UID: \"dd4aa2f3-f0db-4855-b972-e077877518c6\") " pod="openstack/nova-cell0-db-create-d829v" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.887553 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwzfr\" (UniqueName: \"kubernetes.io/projected/dd4aa2f3-f0db-4855-b972-e077877518c6-kube-api-access-hwzfr\") pod \"nova-cell0-db-create-d829v\" (UID: \"dd4aa2f3-f0db-4855-b972-e077877518c6\") " pod="openstack/nova-cell0-db-create-d829v" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.887651 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8409e9a3-56e3-49a4-b270-ee8a2493fa75-operator-scripts\") pod \"nova-api-db-create-bmvjs\" (UID: \"8409e9a3-56e3-49a4-b270-ee8a2493fa75\") " pod="openstack/nova-api-db-create-bmvjs" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.887683 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmvj\" (UniqueName: \"kubernetes.io/projected/8409e9a3-56e3-49a4-b270-ee8a2493fa75-kube-api-access-pxmvj\") pod \"nova-api-db-create-bmvjs\" (UID: \"8409e9a3-56e3-49a4-b270-ee8a2493fa75\") " pod="openstack/nova-api-db-create-bmvjs" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.888812 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8409e9a3-56e3-49a4-b270-ee8a2493fa75-operator-scripts\") pod \"nova-api-db-create-bmvjs\" (UID: \"8409e9a3-56e3-49a4-b270-ee8a2493fa75\") " pod="openstack/nova-api-db-create-bmvjs" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.890768 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-vpsr8"] Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.891998 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vpsr8" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.908955 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vpsr8"] Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.953619 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmvj\" (UniqueName: \"kubernetes.io/projected/8409e9a3-56e3-49a4-b270-ee8a2493fa75-kube-api-access-pxmvj\") pod \"nova-api-db-create-bmvjs\" (UID: \"8409e9a3-56e3-49a4-b270-ee8a2493fa75\") " pod="openstack/nova-api-db-create-bmvjs" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.988605 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2db14df8-28f0-42fc-9891-361afd774445-operator-scripts\") pod \"nova-cell1-db-create-vpsr8\" (UID: \"2db14df8-28f0-42fc-9891-361afd774445\") " pod="openstack/nova-cell1-db-create-vpsr8" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.988658 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4aa2f3-f0db-4855-b972-e077877518c6-operator-scripts\") pod \"nova-cell0-db-create-d829v\" (UID: \"dd4aa2f3-f0db-4855-b972-e077877518c6\") " pod="openstack/nova-cell0-db-create-d829v" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.988682 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwzfr\" (UniqueName: \"kubernetes.io/projected/dd4aa2f3-f0db-4855-b972-e077877518c6-kube-api-access-hwzfr\") pod \"nova-cell0-db-create-d829v\" (UID: \"dd4aa2f3-f0db-4855-b972-e077877518c6\") " pod="openstack/nova-cell0-db-create-d829v" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.988757 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jfl\" (UniqueName: \"kubernetes.io/projected/2db14df8-28f0-42fc-9891-361afd774445-kube-api-access-v2jfl\") pod \"nova-cell1-db-create-vpsr8\" (UID: \"2db14df8-28f0-42fc-9891-361afd774445\") " pod="openstack/nova-cell1-db-create-vpsr8" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.991278 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4aa2f3-f0db-4855-b972-e077877518c6-operator-scripts\") pod \"nova-cell0-db-create-d829v\" (UID: \"dd4aa2f3-f0db-4855-b972-e077877518c6\") " pod="openstack/nova-cell0-db-create-d829v" Dec 08 09:32:38 crc kubenswrapper[4662]: I1208 09:32:38.999779 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-c281-account-create-update-gjq7l"] Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.000951 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c281-account-create-update-gjq7l" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.004225 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.010326 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwzfr\" (UniqueName: \"kubernetes.io/projected/dd4aa2f3-f0db-4855-b972-e077877518c6-kube-api-access-hwzfr\") pod \"nova-cell0-db-create-d829v\" (UID: \"dd4aa2f3-f0db-4855-b972-e077877518c6\") " pod="openstack/nova-cell0-db-create-d829v" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.015334 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c281-account-create-update-gjq7l"] Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.017968 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bmvjs" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.090660 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jfl\" (UniqueName: \"kubernetes.io/projected/2db14df8-28f0-42fc-9891-361afd774445-kube-api-access-v2jfl\") pod \"nova-cell1-db-create-vpsr8\" (UID: \"2db14df8-28f0-42fc-9891-361afd774445\") " pod="openstack/nova-cell1-db-create-vpsr8" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.091044 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk2dm\" (UniqueName: \"kubernetes.io/projected/24937f1a-fdea-4383-9c39-885ee36af08c-kube-api-access-bk2dm\") pod \"nova-api-c281-account-create-update-gjq7l\" (UID: \"24937f1a-fdea-4383-9c39-885ee36af08c\") " pod="openstack/nova-api-c281-account-create-update-gjq7l" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.091109 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24937f1a-fdea-4383-9c39-885ee36af08c-operator-scripts\") pod \"nova-api-c281-account-create-update-gjq7l\" (UID: \"24937f1a-fdea-4383-9c39-885ee36af08c\") " pod="openstack/nova-api-c281-account-create-update-gjq7l" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.091219 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2db14df8-28f0-42fc-9891-361afd774445-operator-scripts\") pod \"nova-cell1-db-create-vpsr8\" (UID: \"2db14df8-28f0-42fc-9891-361afd774445\") " pod="openstack/nova-cell1-db-create-vpsr8" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.092122 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2db14df8-28f0-42fc-9891-361afd774445-operator-scripts\") pod \"nova-cell1-db-create-vpsr8\" (UID: \"2db14df8-28f0-42fc-9891-361afd774445\") " pod="openstack/nova-cell1-db-create-vpsr8" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.112555 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jfl\" (UniqueName: \"kubernetes.io/projected/2db14df8-28f0-42fc-9891-361afd774445-kube-api-access-v2jfl\") pod \"nova-cell1-db-create-vpsr8\" (UID: \"2db14df8-28f0-42fc-9891-361afd774445\") " pod="openstack/nova-cell1-db-create-vpsr8" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.114538 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-d829v" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.186171 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-f69b-account-create-update-q4s95"] Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.187670 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f69b-account-create-update-q4s95" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.190598 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.192414 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-operator-scripts\") pod \"nova-cell0-f69b-account-create-update-q4s95\" (UID: \"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9\") " pod="openstack/nova-cell0-f69b-account-create-update-q4s95" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.192460 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5v7w\" (UniqueName: \"kubernetes.io/projected/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-kube-api-access-j5v7w\") pod \"nova-cell0-f69b-account-create-update-q4s95\" (UID: \"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9\") " pod="openstack/nova-cell0-f69b-account-create-update-q4s95" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.192579 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk2dm\" (UniqueName: \"kubernetes.io/projected/24937f1a-fdea-4383-9c39-885ee36af08c-kube-api-access-bk2dm\") pod \"nova-api-c281-account-create-update-gjq7l\" (UID: \"24937f1a-fdea-4383-9c39-885ee36af08c\") " pod="openstack/nova-api-c281-account-create-update-gjq7l" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.192669 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24937f1a-fdea-4383-9c39-885ee36af08c-operator-scripts\") pod \"nova-api-c281-account-create-update-gjq7l\" (UID: \"24937f1a-fdea-4383-9c39-885ee36af08c\") " pod="openstack/nova-api-c281-account-create-update-gjq7l" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.193413 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24937f1a-fdea-4383-9c39-885ee36af08c-operator-scripts\") pod \"nova-api-c281-account-create-update-gjq7l\" (UID: \"24937f1a-fdea-4383-9c39-885ee36af08c\") " pod="openstack/nova-api-c281-account-create-update-gjq7l" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.198028 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f69b-account-create-update-q4s95"] Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.207809 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vpsr8" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.226311 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk2dm\" (UniqueName: \"kubernetes.io/projected/24937f1a-fdea-4383-9c39-885ee36af08c-kube-api-access-bk2dm\") pod \"nova-api-c281-account-create-update-gjq7l\" (UID: \"24937f1a-fdea-4383-9c39-885ee36af08c\") " pod="openstack/nova-api-c281-account-create-update-gjq7l" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.294287 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-operator-scripts\") pod \"nova-cell0-f69b-account-create-update-q4s95\" (UID: \"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9\") " pod="openstack/nova-cell0-f69b-account-create-update-q4s95" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.294351 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5v7w\" (UniqueName: \"kubernetes.io/projected/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-kube-api-access-j5v7w\") pod \"nova-cell0-f69b-account-create-update-q4s95\" (UID: \"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9\") " pod="openstack/nova-cell0-f69b-account-create-update-q4s95" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.295471 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-operator-scripts\") pod \"nova-cell0-f69b-account-create-update-q4s95\" (UID: \"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9\") " pod="openstack/nova-cell0-f69b-account-create-update-q4s95" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.312442 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5v7w\" (UniqueName: \"kubernetes.io/projected/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-kube-api-access-j5v7w\") pod \"nova-cell0-f69b-account-create-update-q4s95\" (UID: \"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9\") " pod="openstack/nova-cell0-f69b-account-create-update-q4s95" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.359686 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c281-account-create-update-gjq7l" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.383345 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-0230-account-create-update-mqp6s"] Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.384334 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0230-account-create-update-mqp6s" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.388905 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.391854 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0230-account-create-update-mqp6s"] Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.395690 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7sbn\" (UniqueName: \"kubernetes.io/projected/433f7180-bb32-4bf5-b1d2-c75388f8011d-kube-api-access-x7sbn\") pod \"nova-cell1-0230-account-create-update-mqp6s\" (UID: \"433f7180-bb32-4bf5-b1d2-c75388f8011d\") " pod="openstack/nova-cell1-0230-account-create-update-mqp6s" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.396289 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433f7180-bb32-4bf5-b1d2-c75388f8011d-operator-scripts\") pod \"nova-cell1-0230-account-create-update-mqp6s\" (UID: \"433f7180-bb32-4bf5-b1d2-c75388f8011d\") " pod="openstack/nova-cell1-0230-account-create-update-mqp6s" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.499399 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7sbn\" (UniqueName: \"kubernetes.io/projected/433f7180-bb32-4bf5-b1d2-c75388f8011d-kube-api-access-x7sbn\") pod \"nova-cell1-0230-account-create-update-mqp6s\" (UID: \"433f7180-bb32-4bf5-b1d2-c75388f8011d\") " pod="openstack/nova-cell1-0230-account-create-update-mqp6s" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.499515 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433f7180-bb32-4bf5-b1d2-c75388f8011d-operator-scripts\") pod \"nova-cell1-0230-account-create-update-mqp6s\" (UID: \"433f7180-bb32-4bf5-b1d2-c75388f8011d\") " pod="openstack/nova-cell1-0230-account-create-update-mqp6s" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.500346 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433f7180-bb32-4bf5-b1d2-c75388f8011d-operator-scripts\") pod \"nova-cell1-0230-account-create-update-mqp6s\" (UID: \"433f7180-bb32-4bf5-b1d2-c75388f8011d\") " pod="openstack/nova-cell1-0230-account-create-update-mqp6s" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.515640 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7sbn\" (UniqueName: \"kubernetes.io/projected/433f7180-bb32-4bf5-b1d2-c75388f8011d-kube-api-access-x7sbn\") pod \"nova-cell1-0230-account-create-update-mqp6s\" (UID: \"433f7180-bb32-4bf5-b1d2-c75388f8011d\") " pod="openstack/nova-cell1-0230-account-create-update-mqp6s" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.516514 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f69b-account-create-update-q4s95" Dec 08 09:32:39 crc kubenswrapper[4662]: I1208 09:32:39.707837 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-0230-account-create-update-mqp6s" Dec 08 09:32:41 crc kubenswrapper[4662]: I1208 09:32:41.209871 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"be8af0bd-5d4a-4a83-84f5-5687dfeaab59","Type":"ContainerStarted","Data":"35f6b9dac8f3e48bb3039738889c4be219dde86b392b4a96e995ac23aceb8bd4"} Dec 08 09:32:41 crc kubenswrapper[4662]: I1208 09:32:41.248476 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.085953791 podStartE2EDuration="15.248458293s" podCreationTimestamp="2025-12-08 09:32:26 +0000 UTC" firstStartedPulling="2025-12-08 09:32:27.462840166 +0000 UTC m=+1071.031868156" lastFinishedPulling="2025-12-08 09:32:40.625344668 +0000 UTC m=+1084.194372658" observedRunningTime="2025-12-08 09:32:41.236222286 +0000 UTC m=+1084.805250276" watchObservedRunningTime="2025-12-08 09:32:41.248458293 +0000 UTC m=+1084.817486273" Dec 08 09:32:41 crc kubenswrapper[4662]: W1208 09:32:41.259906 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8409e9a3_56e3_49a4_b270_ee8a2493fa75.slice/crio-63019b037a56ffb5e6ebc87d6f1e690473549f6a7d3a30122c2369972b25f52e WatchSource:0}: Error finding container 63019b037a56ffb5e6ebc87d6f1e690473549f6a7d3a30122c2369972b25f52e: Status 404 returned error can't find the container with id 63019b037a56ffb5e6ebc87d6f1e690473549f6a7d3a30122c2369972b25f52e Dec 08 09:32:41 crc kubenswrapper[4662]: I1208 09:32:41.298891 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bmvjs"] Dec 08 09:32:41 crc kubenswrapper[4662]: I1208 09:32:41.353651 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-0230-account-create-update-mqp6s"] Dec 08 09:32:41 crc kubenswrapper[4662]: I1208 09:32:41.372493 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-f69b-account-create-update-q4s95"] Dec 08 09:32:41 crc kubenswrapper[4662]: W1208 09:32:41.507192 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd4aa2f3_f0db_4855_b972_e077877518c6.slice/crio-73a46b94dd2f9bd010ec77376422089da1461f358d05250cacb9ba4a77d1a926 WatchSource:0}: Error finding container 73a46b94dd2f9bd010ec77376422089da1461f358d05250cacb9ba4a77d1a926: Status 404 returned error can't find the container with id 73a46b94dd2f9bd010ec77376422089da1461f358d05250cacb9ba4a77d1a926 Dec 08 09:32:41 crc kubenswrapper[4662]: I1208 09:32:41.519732 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-d829v"] Dec 08 09:32:41 crc kubenswrapper[4662]: I1208 09:32:41.534998 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-vpsr8"] Dec 08 09:32:41 crc kubenswrapper[4662]: I1208 09:32:41.706300 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-c281-account-create-update-gjq7l"] Dec 08 09:32:41 crc kubenswrapper[4662]: W1208 09:32:41.710199 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24937f1a_fdea_4383_9c39_885ee36af08c.slice/crio-84dc7b5e61381dfc87e8428252bce94283af7fb8eec01c22aa332983bc2b627c WatchSource:0}: Error finding container 84dc7b5e61381dfc87e8428252bce94283af7fb8eec01c22aa332983bc2b627c: Status 404 returned error can't 
find the container with id 84dc7b5e61381dfc87e8428252bce94283af7fb8eec01c22aa332983bc2b627c Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.167768 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.168190 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="ceilometer-central-agent" containerID="cri-o://486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb" gracePeriod=30 Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.168299 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="proxy-httpd" containerID="cri-o://5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2" gracePeriod=30 Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.168266 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="sg-core" containerID="cri-o://decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965" gracePeriod=30 Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.168555 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="ceilometer-notification-agent" containerID="cri-o://edc476a735721523ff36fc0887d888830778506821183539396509c91890904a" gracePeriod=30 Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.225061 4662 generic.go:334] "Generic (PLEG): container finished" podID="dd4aa2f3-f0db-4855-b972-e077877518c6" containerID="691ff21b5113c8d07d1dcea6a0edc95bddef5fdadfcb0747546fe9d0c0116cee" exitCode=0 Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.225160 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d829v" event={"ID":"dd4aa2f3-f0db-4855-b972-e077877518c6","Type":"ContainerDied","Data":"691ff21b5113c8d07d1dcea6a0edc95bddef5fdadfcb0747546fe9d0c0116cee"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.225186 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d829v" event={"ID":"dd4aa2f3-f0db-4855-b972-e077877518c6","Type":"ContainerStarted","Data":"73a46b94dd2f9bd010ec77376422089da1461f358d05250cacb9ba4a77d1a926"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.228290 4662 generic.go:334] "Generic (PLEG): container finished" podID="24937f1a-fdea-4383-9c39-885ee36af08c" containerID="83ea04b4b478f25e9cd3eb5fb31f0eeeac687aa6085a5b6fd6865d2f8070578e" exitCode=0 Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.228400 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c281-account-create-update-gjq7l" event={"ID":"24937f1a-fdea-4383-9c39-885ee36af08c","Type":"ContainerDied","Data":"83ea04b4b478f25e9cd3eb5fb31f0eeeac687aa6085a5b6fd6865d2f8070578e"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.228430 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c281-account-create-update-gjq7l" event={"ID":"24937f1a-fdea-4383-9c39-885ee36af08c","Type":"ContainerStarted","Data":"84dc7b5e61381dfc87e8428252bce94283af7fb8eec01c22aa332983bc2b627c"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.230397 4662 generic.go:334] "Generic (PLEG): container finished" 
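The pod_startup_latency_tracker entry for openstack/openstackclient above is internally consistent and shows what the two durations mean: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp (09:32:41.248458293 - 09:32:26 = 15.248458293s), while podStartSLOduration additionally subtracts the image pull window (lastFinishedPulling - firstStartedPulling = 13.162504502s), leaving 2.085953791s. A quick Go check, using the log's timestamps with their monotonic "m=+..." suffixes trimmed:

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-12-08 09:32:26 +0000 UTC")
        firstPull := mustParse("2025-12-08 09:32:27.462840166 +0000 UTC")
        lastPull := mustParse("2025-12-08 09:32:40.625344668 +0000 UTC")
        observed := mustParse("2025-12-08 09:32:41.248458293 +0000 UTC")

        e2e := observed.Sub(created)     // podStartE2EDuration: 15.248458293s
        pull := lastPull.Sub(firstPull)  // image pull window: 13.162504502s
        fmt.Println(e2e, pull, e2e-pull) // e2e-pull = 2.085953791s = podStartSLOduration
    }

The earlier cinder-scheduler-0 entry fits the same formula: with both pull timestamps at their zero value (0001-01-01), the SLO and E2E durations coincide at 4.025669428s.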
podID="433f7180-bb32-4bf5-b1d2-c75388f8011d" containerID="cf96b00d2bdd8da17c0a878be900be2ec1561eccfd406dc199e3bd02da6acbe2" exitCode=0 Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.230467 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0230-account-create-update-mqp6s" event={"ID":"433f7180-bb32-4bf5-b1d2-c75388f8011d","Type":"ContainerDied","Data":"cf96b00d2bdd8da17c0a878be900be2ec1561eccfd406dc199e3bd02da6acbe2"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.230492 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0230-account-create-update-mqp6s" event={"ID":"433f7180-bb32-4bf5-b1d2-c75388f8011d","Type":"ContainerStarted","Data":"dc826e523fd08df33e7bced3a073a4b1b18d16494890ec56f2dc0c11caa692e6"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.231575 4662 generic.go:334] "Generic (PLEG): container finished" podID="c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9" containerID="5666cf5452ee905ac68ada7f3892549c2edeb8bb0887c3e2988493820d357040" exitCode=0 Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.231629 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f69b-account-create-update-q4s95" event={"ID":"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9","Type":"ContainerDied","Data":"5666cf5452ee905ac68ada7f3892549c2edeb8bb0887c3e2988493820d357040"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.231668 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f69b-account-create-update-q4s95" event={"ID":"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9","Type":"ContainerStarted","Data":"85cb657e9b0452c0c3ecce2f9c8bca1c48b3bb18c898d84cc1427e8af41c6051"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.234290 4662 generic.go:334] "Generic (PLEG): container finished" podID="2db14df8-28f0-42fc-9891-361afd774445" containerID="e42c1551583bc477936dddba8be4f7a53c2e78749fc95eb0becf2910232976c0" exitCode=0 Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.234349 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpsr8" event={"ID":"2db14df8-28f0-42fc-9891-361afd774445","Type":"ContainerDied","Data":"e42c1551583bc477936dddba8be4f7a53c2e78749fc95eb0becf2910232976c0"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.234372 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpsr8" event={"ID":"2db14df8-28f0-42fc-9891-361afd774445","Type":"ContainerStarted","Data":"4227cf5762c31ce8cc456ea80f86dc946f33d3edd57161f6a97c5c4be8b1c2b0"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.236317 4662 generic.go:334] "Generic (PLEG): container finished" podID="8409e9a3-56e3-49a4-b270-ee8a2493fa75" containerID="d55f2ed09163d68f4e9c7180a9aa7b91dc839496296c1d3288141dde11f63ea1" exitCode=0 Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.236459 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bmvjs" event={"ID":"8409e9a3-56e3-49a4-b270-ee8a2493fa75","Type":"ContainerDied","Data":"d55f2ed09163d68f4e9c7180a9aa7b91dc839496296c1d3288141dde11f63ea1"} Dec 08 09:32:42 crc kubenswrapper[4662]: I1208 09:32:42.236509 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bmvjs" event={"ID":"8409e9a3-56e3-49a4-b270-ee8a2493fa75","Type":"ContainerStarted","Data":"63019b037a56ffb5e6ebc87d6f1e690473549f6a7d3a30122c2369972b25f52e"} Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.217052 4662 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.277273 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-sg-core-conf-yaml\") pod \"ae70464a-32e1-41cc-b173-85019c99d562\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.277324 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-log-httpd\") pod \"ae70464a-32e1-41cc-b173-85019c99d562\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.277357 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-config-data\") pod \"ae70464a-32e1-41cc-b173-85019c99d562\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.277379 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-scripts\") pod \"ae70464a-32e1-41cc-b173-85019c99d562\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.277428 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-run-httpd\") pod \"ae70464a-32e1-41cc-b173-85019c99d562\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.277473 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jc6wn\" (UniqueName: \"kubernetes.io/projected/ae70464a-32e1-41cc-b173-85019c99d562-kube-api-access-jc6wn\") pod \"ae70464a-32e1-41cc-b173-85019c99d562\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.277504 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-combined-ca-bundle\") pod \"ae70464a-32e1-41cc-b173-85019c99d562\" (UID: \"ae70464a-32e1-41cc-b173-85019c99d562\") " Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.282528 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "ae70464a-32e1-41cc-b173-85019c99d562" (UID: "ae70464a-32e1-41cc-b173-85019c99d562"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.282881 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "ae70464a-32e1-41cc-b173-85019c99d562" (UID: "ae70464a-32e1-41cc-b173-85019c99d562"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.283782 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-scripts" (OuterVolumeSpecName: "scripts") pod "ae70464a-32e1-41cc-b173-85019c99d562" (UID: "ae70464a-32e1-41cc-b173-85019c99d562"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.285563 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.293858 4662 generic.go:334] "Generic (PLEG): container finished" podID="ae70464a-32e1-41cc-b173-85019c99d562" containerID="5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2" exitCode=0 Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.293922 4662 generic.go:334] "Generic (PLEG): container finished" podID="ae70464a-32e1-41cc-b173-85019c99d562" containerID="decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965" exitCode=2 Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.293932 4662 generic.go:334] "Generic (PLEG): container finished" podID="ae70464a-32e1-41cc-b173-85019c99d562" containerID="edc476a735721523ff36fc0887d888830778506821183539396509c91890904a" exitCode=0 Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.293941 4662 generic.go:334] "Generic (PLEG): container finished" podID="ae70464a-32e1-41cc-b173-85019c99d562" containerID="486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb" exitCode=0 Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.294179 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae70464a-32e1-41cc-b173-85019c99d562","Type":"ContainerDied","Data":"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2"} Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.294275 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae70464a-32e1-41cc-b173-85019c99d562","Type":"ContainerDied","Data":"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965"} Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.294291 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae70464a-32e1-41cc-b173-85019c99d562","Type":"ContainerDied","Data":"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a"} Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.294313 4662 scope.go:117] "RemoveContainer" containerID="5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.294323 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae70464a-32e1-41cc-b173-85019c99d562","Type":"ContainerDied","Data":"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb"} Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.294587 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"ae70464a-32e1-41cc-b173-85019c99d562","Type":"ContainerDied","Data":"5afb20f95751e1c9d7291d73fa029970d72585120405dc11035276cf16221589"} Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.296347 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae70464a-32e1-41cc-b173-85019c99d562-kube-api-access-jc6wn" 
(OuterVolumeSpecName: "kube-api-access-jc6wn") pod "ae70464a-32e1-41cc-b173-85019c99d562" (UID: "ae70464a-32e1-41cc-b173-85019c99d562"). InnerVolumeSpecName "kube-api-access-jc6wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.354128 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "ae70464a-32e1-41cc-b173-85019c99d562" (UID: "ae70464a-32e1-41cc-b173-85019c99d562"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.382492 4662 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.382554 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jc6wn\" (UniqueName: \"kubernetes.io/projected/ae70464a-32e1-41cc-b173-85019c99d562-kube-api-access-jc6wn\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.382567 4662 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.382576 4662 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/ae70464a-32e1-41cc-b173-85019c99d562-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.382585 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.483993 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-config-data" (OuterVolumeSpecName: "config-data") pod "ae70464a-32e1-41cc-b173-85019c99d562" (UID: "ae70464a-32e1-41cc-b173-85019c99d562"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.486116 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.493001 4662 scope.go:117] "RemoveContainer" containerID="decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.493199 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae70464a-32e1-41cc-b173-85019c99d562" (UID: "ae70464a-32e1-41cc-b173-85019c99d562"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.548943 4662 scope.go:117] "RemoveContainer" containerID="edc476a735721523ff36fc0887d888830778506821183539396509c91890904a" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.595181 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae70464a-32e1-41cc-b173-85019c99d562-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.613908 4662 scope.go:117] "RemoveContainer" containerID="486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.689810 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.757883 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.768819 4662 scope.go:117] "RemoveContainer" containerID="5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.776597 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:43 crc kubenswrapper[4662]: E1208 09:32:43.777055 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="ceilometer-notification-agent" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.777076 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="ceilometer-notification-agent" Dec 08 09:32:43 crc kubenswrapper[4662]: E1208 09:32:43.777106 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="proxy-httpd" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.777117 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="proxy-httpd" Dec 08 09:32:43 crc kubenswrapper[4662]: E1208 09:32:43.777141 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="sg-core" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.777148 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="sg-core" Dec 08 09:32:43 crc kubenswrapper[4662]: E1208 09:32:43.777168 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="ceilometer-central-agent" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.777177 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="ceilometer-central-agent" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.777394 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="ceilometer-notification-agent" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.777435 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="proxy-httpd" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.777446 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="sg-core" Dec 08 09:32:43 crc 
kubenswrapper[4662]: I1208 09:32:43.777461 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae70464a-32e1-41cc-b173-85019c99d562" containerName="ceilometer-central-agent" Dec 08 09:32:43 crc kubenswrapper[4662]: E1208 09:32:43.777773 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2\": container with ID starting with 5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2 not found: ID does not exist" containerID="5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.777834 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2"} err="failed to get container status \"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2\": rpc error: code = NotFound desc = could not find container \"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2\": container with ID starting with 5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2 not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.777873 4662 scope.go:117] "RemoveContainer" containerID="decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965" Dec 08 09:32:43 crc kubenswrapper[4662]: E1208 09:32:43.778613 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965\": container with ID starting with decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965 not found: ID does not exist" containerID="decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.778729 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965"} err="failed to get container status \"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965\": rpc error: code = NotFound desc = could not find container \"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965\": container with ID starting with decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965 not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.778850 4662 scope.go:117] "RemoveContainer" containerID="edc476a735721523ff36fc0887d888830778506821183539396509c91890904a" Dec 08 09:32:43 crc kubenswrapper[4662]: E1208 09:32:43.779198 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a\": container with ID starting with edc476a735721523ff36fc0887d888830778506821183539396509c91890904a not found: ID does not exist" containerID="edc476a735721523ff36fc0887d888830778506821183539396509c91890904a" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.779300 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a"} err="failed to get container status \"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a\": rpc error: code = NotFound desc = could not find container 
\"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a\": container with ID starting with edc476a735721523ff36fc0887d888830778506821183539396509c91890904a not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.779381 4662 scope.go:117] "RemoveContainer" containerID="486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb" Dec 08 09:32:43 crc kubenswrapper[4662]: E1208 09:32:43.779628 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb\": container with ID starting with 486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb not found: ID does not exist" containerID="486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.779734 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb"} err="failed to get container status \"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb\": rpc error: code = NotFound desc = could not find container \"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb\": container with ID starting with 486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.779836 4662 scope.go:117] "RemoveContainer" containerID="5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.780121 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2"} err="failed to get container status \"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2\": rpc error: code = NotFound desc = could not find container \"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2\": container with ID starting with 5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2 not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.780218 4662 scope.go:117] "RemoveContainer" containerID="decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.780570 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.781786 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965"} err="failed to get container status \"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965\": rpc error: code = NotFound desc = could not find container \"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965\": container with ID starting with decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965 not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.781822 4662 scope.go:117] "RemoveContainer" containerID="edc476a735721523ff36fc0887d888830778506821183539396509c91890904a" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.782122 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a"} err="failed to get container status \"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a\": rpc error: code = NotFound desc = could not find container \"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a\": container with ID starting with edc476a735721523ff36fc0887d888830778506821183539396509c91890904a not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.782145 4662 scope.go:117] "RemoveContainer" containerID="486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.799808 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb"} err="failed to get container status \"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb\": rpc error: code = NotFound desc = could not find container \"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb\": container with ID starting with 486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.799862 4662 scope.go:117] "RemoveContainer" containerID="5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.801008 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2"} err="failed to get container status \"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2\": rpc error: code = NotFound desc = could not find container \"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2\": container with ID starting with 5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2 not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.801031 4662 scope.go:117] "RemoveContainer" containerID="decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.801238 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965"} err="failed to get container status \"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965\": rpc error: code = NotFound desc = could not find container 
\"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965\": container with ID starting with decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965 not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.801257 4662 scope.go:117] "RemoveContainer" containerID="edc476a735721523ff36fc0887d888830778506821183539396509c91890904a" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.801405 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.802591 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a"} err="failed to get container status \"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a\": rpc error: code = NotFound desc = could not find container \"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a\": container with ID starting with edc476a735721523ff36fc0887d888830778506821183539396509c91890904a not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.802612 4662 scope.go:117] "RemoveContainer" containerID="486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.802853 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb"} err="failed to get container status \"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb\": rpc error: code = NotFound desc = could not find container \"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb\": container with ID starting with 486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.802872 4662 scope.go:117] "RemoveContainer" containerID="5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.802925 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.803971 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.804260 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2"} err="failed to get container status \"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2\": rpc error: code = NotFound desc = could not find container \"5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2\": container with ID starting with 5d9cbeade8f82cd180eec0a4c7f0045dbb994590a1f05108260fa698d3d907f2 not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.804290 4662 scope.go:117] "RemoveContainer" containerID="decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.809018 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965"} err="failed to get container status \"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965\": rpc error: code = NotFound desc = could not 
find container \"decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965\": container with ID starting with decc27b9378acf4d8a76a2c26ebc56143c1388e7e60f22aa4245a260d8dcf965 not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.809221 4662 scope.go:117] "RemoveContainer" containerID="edc476a735721523ff36fc0887d888830778506821183539396509c91890904a" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.811534 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a"} err="failed to get container status \"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a\": rpc error: code = NotFound desc = could not find container \"edc476a735721523ff36fc0887d888830778506821183539396509c91890904a\": container with ID starting with edc476a735721523ff36fc0887d888830778506821183539396509c91890904a not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.811802 4662 scope.go:117] "RemoveContainer" containerID="486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.813711 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.815585 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb"} err="failed to get container status \"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb\": rpc error: code = NotFound desc = could not find container \"486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb\": container with ID starting with 486ef4506308bc854b2edb00653f72a65b230e30d447a75c9574a96b043060eb not found: ID does not exist" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.829022 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzkcp\" (UniqueName: \"kubernetes.io/projected/d258d8f4-56f3-4ab1-8361-882aa7237b39-kube-api-access-hzkcp\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.829077 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-log-httpd\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.829142 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-config-data\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.829193 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.829213 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-run-httpd\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.829227 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-scripts\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.829255 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: E1208 09:32:43.858255 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle config-data kube-api-access-hzkcp log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="d258d8f4-56f3-4ab1-8361-882aa7237b39" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.930793 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzkcp\" (UniqueName: \"kubernetes.io/projected/d258d8f4-56f3-4ab1-8361-882aa7237b39-kube-api-access-hzkcp\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.933773 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-log-httpd\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.934034 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-config-data\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.934181 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.934250 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-run-httpd\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.934339 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-scripts\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.934439 4662 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.937648 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-log-httpd\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.938859 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-run-httpd\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.940008 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.940708 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.942566 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-config-data\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.947560 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-scripts\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.965568 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzkcp\" (UniqueName: \"kubernetes.io/projected/d258d8f4-56f3-4ab1-8361-882aa7237b39-kube-api-access-hzkcp\") pod \"ceilometer-0\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " pod="openstack/ceilometer-0" Dec 08 09:32:43 crc kubenswrapper[4662]: I1208 09:32:43.966458 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-c281-account-create-update-gjq7l" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.037329 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24937f1a-fdea-4383-9c39-885ee36af08c-operator-scripts\") pod \"24937f1a-fdea-4383-9c39-885ee36af08c\" (UID: \"24937f1a-fdea-4383-9c39-885ee36af08c\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.037634 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bk2dm\" (UniqueName: \"kubernetes.io/projected/24937f1a-fdea-4383-9c39-885ee36af08c-kube-api-access-bk2dm\") pod \"24937f1a-fdea-4383-9c39-885ee36af08c\" (UID: \"24937f1a-fdea-4383-9c39-885ee36af08c\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.038527 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24937f1a-fdea-4383-9c39-885ee36af08c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24937f1a-fdea-4383-9c39-885ee36af08c" (UID: "24937f1a-fdea-4383-9c39-885ee36af08c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.046891 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24937f1a-fdea-4383-9c39-885ee36af08c-kube-api-access-bk2dm" (OuterVolumeSpecName: "kube-api-access-bk2dm") pod "24937f1a-fdea-4383-9c39-885ee36af08c" (UID: "24937f1a-fdea-4383-9c39-885ee36af08c"). InnerVolumeSpecName "kube-api-access-bk2dm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.140091 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24937f1a-fdea-4383-9c39-885ee36af08c-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.140127 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bk2dm\" (UniqueName: \"kubernetes.io/projected/24937f1a-fdea-4383-9c39-885ee36af08c-kube-api-access-bk2dm\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.335019 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-0230-account-create-update-mqp6s" event={"ID":"433f7180-bb32-4bf5-b1d2-c75388f8011d","Type":"ContainerDied","Data":"dc826e523fd08df33e7bced3a073a4b1b18d16494890ec56f2dc0c11caa692e6"} Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.335051 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc826e523fd08df33e7bced3a073a4b1b18d16494890ec56f2dc0c11caa692e6" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.355002 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-vpsr8" event={"ID":"2db14df8-28f0-42fc-9891-361afd774445","Type":"ContainerDied","Data":"4227cf5762c31ce8cc456ea80f86dc946f33d3edd57161f6a97c5c4be8b1c2b0"} Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.355048 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4227cf5762c31ce8cc456ea80f86dc946f33d3edd57161f6a97c5c4be8b1c2b0" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.364927 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-c281-account-create-update-gjq7l" 
event={"ID":"24937f1a-fdea-4383-9c39-885ee36af08c","Type":"ContainerDied","Data":"84dc7b5e61381dfc87e8428252bce94283af7fb8eec01c22aa332983bc2b627c"} Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.364964 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84dc7b5e61381dfc87e8428252bce94283af7fb8eec01c22aa332983bc2b627c" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.365026 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-c281-account-create-update-gjq7l" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.387229 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.387644 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bmvjs" event={"ID":"8409e9a3-56e3-49a4-b270-ee8a2493fa75","Type":"ContainerDied","Data":"63019b037a56ffb5e6ebc87d6f1e690473549f6a7d3a30122c2369972b25f52e"} Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.387667 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63019b037a56ffb5e6ebc87d6f1e690473549f6a7d3a30122c2369972b25f52e" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.426689 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bmvjs" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.456811 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8409e9a3-56e3-49a4-b270-ee8a2493fa75-operator-scripts\") pod \"8409e9a3-56e3-49a4-b270-ee8a2493fa75\" (UID: \"8409e9a3-56e3-49a4-b270-ee8a2493fa75\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.456863 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxmvj\" (UniqueName: \"kubernetes.io/projected/8409e9a3-56e3-49a4-b270-ee8a2493fa75-kube-api-access-pxmvj\") pod \"8409e9a3-56e3-49a4-b270-ee8a2493fa75\" (UID: \"8409e9a3-56e3-49a4-b270-ee8a2493fa75\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.457230 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8409e9a3-56e3-49a4-b270-ee8a2493fa75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8409e9a3-56e3-49a4-b270-ee8a2493fa75" (UID: "8409e9a3-56e3-49a4-b270-ee8a2493fa75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.457497 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0230-account-create-update-mqp6s" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.467201 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8409e9a3-56e3-49a4-b270-ee8a2493fa75-kube-api-access-pxmvj" (OuterVolumeSpecName: "kube-api-access-pxmvj") pod "8409e9a3-56e3-49a4-b270-ee8a2493fa75" (UID: "8409e9a3-56e3-49a4-b270-ee8a2493fa75"). InnerVolumeSpecName "kube-api-access-pxmvj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.476392 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-vpsr8" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.476453 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d829v" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.499607 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f69b-account-create-update-q4s95" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.503204 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:44 crc kubenswrapper[4662]: E1208 09:32:44.537186 4662 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24937f1a_fdea_4383_9c39_885ee36af08c.slice\": RecentStats: unable to find data in memory cache]" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.557721 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-operator-scripts\") pod \"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9\" (UID: \"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.557794 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5v7w\" (UniqueName: \"kubernetes.io/projected/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-kube-api-access-j5v7w\") pod \"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9\" (UID: \"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.557821 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433f7180-bb32-4bf5-b1d2-c75388f8011d-operator-scripts\") pod \"433f7180-bb32-4bf5-b1d2-c75388f8011d\" (UID: \"433f7180-bb32-4bf5-b1d2-c75388f8011d\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.557842 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-scripts\") pod \"d258d8f4-56f3-4ab1-8361-882aa7237b39\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.557884 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7sbn\" (UniqueName: \"kubernetes.io/projected/433f7180-bb32-4bf5-b1d2-c75388f8011d-kube-api-access-x7sbn\") pod \"433f7180-bb32-4bf5-b1d2-c75388f8011d\" (UID: \"433f7180-bb32-4bf5-b1d2-c75388f8011d\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.557906 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-log-httpd\") pod \"d258d8f4-56f3-4ab1-8361-882aa7237b39\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.557927 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2db14df8-28f0-42fc-9891-361afd774445-operator-scripts\") pod \"2db14df8-28f0-42fc-9891-361afd774445\" (UID: \"2db14df8-28f0-42fc-9891-361afd774445\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.557981 4662 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-v2jfl\" (UniqueName: \"kubernetes.io/projected/2db14df8-28f0-42fc-9891-361afd774445-kube-api-access-v2jfl\") pod \"2db14df8-28f0-42fc-9891-361afd774445\" (UID: \"2db14df8-28f0-42fc-9891-361afd774445\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558012 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hzkcp\" (UniqueName: \"kubernetes.io/projected/d258d8f4-56f3-4ab1-8361-882aa7237b39-kube-api-access-hzkcp\") pod \"d258d8f4-56f3-4ab1-8361-882aa7237b39\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558055 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwzfr\" (UniqueName: \"kubernetes.io/projected/dd4aa2f3-f0db-4855-b972-e077877518c6-kube-api-access-hwzfr\") pod \"dd4aa2f3-f0db-4855-b972-e077877518c6\" (UID: \"dd4aa2f3-f0db-4855-b972-e077877518c6\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558108 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4aa2f3-f0db-4855-b972-e077877518c6-operator-scripts\") pod \"dd4aa2f3-f0db-4855-b972-e077877518c6\" (UID: \"dd4aa2f3-f0db-4855-b972-e077877518c6\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558180 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-sg-core-conf-yaml\") pod \"d258d8f4-56f3-4ab1-8361-882aa7237b39\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558200 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-config-data\") pod \"d258d8f4-56f3-4ab1-8361-882aa7237b39\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558243 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-run-httpd\") pod \"d258d8f4-56f3-4ab1-8361-882aa7237b39\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558311 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-combined-ca-bundle\") pod \"d258d8f4-56f3-4ab1-8361-882aa7237b39\" (UID: \"d258d8f4-56f3-4ab1-8361-882aa7237b39\") " Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558582 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d258d8f4-56f3-4ab1-8361-882aa7237b39" (UID: "d258d8f4-56f3-4ab1-8361-882aa7237b39"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558710 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxmvj\" (UniqueName: \"kubernetes.io/projected/8409e9a3-56e3-49a4-b270-ee8a2493fa75-kube-api-access-pxmvj\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558722 4662 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.558731 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8409e9a3-56e3-49a4-b270-ee8a2493fa75-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.559072 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9" (UID: "c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.560295 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2db14df8-28f0-42fc-9891-361afd774445-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2db14df8-28f0-42fc-9891-361afd774445" (UID: "2db14df8-28f0-42fc-9891-361afd774445"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.561714 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/433f7180-bb32-4bf5-b1d2-c75388f8011d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "433f7180-bb32-4bf5-b1d2-c75388f8011d" (UID: "433f7180-bb32-4bf5-b1d2-c75388f8011d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.562072 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d258d8f4-56f3-4ab1-8361-882aa7237b39" (UID: "d258d8f4-56f3-4ab1-8361-882aa7237b39"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.562555 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd4aa2f3-f0db-4855-b972-e077877518c6-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dd4aa2f3-f0db-4855-b972-e077877518c6" (UID: "dd4aa2f3-f0db-4855-b972-e077877518c6"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.563786 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-kube-api-access-j5v7w" (OuterVolumeSpecName: "kube-api-access-j5v7w") pod "c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9" (UID: "c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9"). InnerVolumeSpecName "kube-api-access-j5v7w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.565417 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2db14df8-28f0-42fc-9891-361afd774445-kube-api-access-v2jfl" (OuterVolumeSpecName: "kube-api-access-v2jfl") pod "2db14df8-28f0-42fc-9891-361afd774445" (UID: "2db14df8-28f0-42fc-9891-361afd774445"). InnerVolumeSpecName "kube-api-access-v2jfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.567092 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4aa2f3-f0db-4855-b972-e077877518c6-kube-api-access-hwzfr" (OuterVolumeSpecName: "kube-api-access-hwzfr") pod "dd4aa2f3-f0db-4855-b972-e077877518c6" (UID: "dd4aa2f3-f0db-4855-b972-e077877518c6"). InnerVolumeSpecName "kube-api-access-hwzfr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.567550 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d258d8f4-56f3-4ab1-8361-882aa7237b39" (UID: "d258d8f4-56f3-4ab1-8361-882aa7237b39"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.577128 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-config-data" (OuterVolumeSpecName: "config-data") pod "d258d8f4-56f3-4ab1-8361-882aa7237b39" (UID: "d258d8f4-56f3-4ab1-8361-882aa7237b39"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.577166 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-scripts" (OuterVolumeSpecName: "scripts") pod "d258d8f4-56f3-4ab1-8361-882aa7237b39" (UID: "d258d8f4-56f3-4ab1-8361-882aa7237b39"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.578301 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d258d8f4-56f3-4ab1-8361-882aa7237b39-kube-api-access-hzkcp" (OuterVolumeSpecName: "kube-api-access-hzkcp") pod "d258d8f4-56f3-4ab1-8361-882aa7237b39" (UID: "d258d8f4-56f3-4ab1-8361-882aa7237b39"). InnerVolumeSpecName "kube-api-access-hzkcp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.581451 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d258d8f4-56f3-4ab1-8361-882aa7237b39" (UID: "d258d8f4-56f3-4ab1-8361-882aa7237b39"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.585262 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/433f7180-bb32-4bf5-b1d2-c75388f8011d-kube-api-access-x7sbn" (OuterVolumeSpecName: "kube-api-access-x7sbn") pod "433f7180-bb32-4bf5-b1d2-c75388f8011d" (UID: "433f7180-bb32-4bf5-b1d2-c75388f8011d"). 
InnerVolumeSpecName "kube-api-access-x7sbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660103 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660151 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660164 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5v7w\" (UniqueName: \"kubernetes.io/projected/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9-kube-api-access-j5v7w\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660177 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/433f7180-bb32-4bf5-b1d2-c75388f8011d-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660186 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660194 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7sbn\" (UniqueName: \"kubernetes.io/projected/433f7180-bb32-4bf5-b1d2-c75388f8011d-kube-api-access-x7sbn\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660202 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2db14df8-28f0-42fc-9891-361afd774445-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660210 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2jfl\" (UniqueName: \"kubernetes.io/projected/2db14df8-28f0-42fc-9891-361afd774445-kube-api-access-v2jfl\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660218 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hzkcp\" (UniqueName: \"kubernetes.io/projected/d258d8f4-56f3-4ab1-8361-882aa7237b39-kube-api-access-hzkcp\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660226 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwzfr\" (UniqueName: \"kubernetes.io/projected/dd4aa2f3-f0db-4855-b972-e077877518c6-kube-api-access-hwzfr\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660235 4662 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dd4aa2f3-f0db-4855-b972-e077877518c6-operator-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660243 4662 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660251 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d258d8f4-56f3-4ab1-8361-882aa7237b39-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.660260 4662 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d258d8f4-56f3-4ab1-8361-882aa7237b39-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:44 crc kubenswrapper[4662]: I1208 09:32:44.707594 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae70464a-32e1-41cc-b173-85019c99d562" path="/var/lib/kubelet/pods/ae70464a-32e1-41cc-b173-85019c99d562/volumes" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.426228 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-f69b-account-create-update-q4s95" event={"ID":"c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9","Type":"ContainerDied","Data":"85cb657e9b0452c0c3ecce2f9c8bca1c48b3bb18c898d84cc1427e8af41c6051"} Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.426283 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85cb657e9b0452c0c3ecce2f9c8bca1c48b3bb18c898d84cc1427e8af41c6051" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.426382 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-f69b-account-create-update-q4s95" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.429532 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-d829v" event={"ID":"dd4aa2f3-f0db-4855-b972-e077877518c6","Type":"ContainerDied","Data":"73a46b94dd2f9bd010ec77376422089da1461f358d05250cacb9ba4a77d1a926"} Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.429554 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73a46b94dd2f9bd010ec77376422089da1461f358d05250cacb9ba4a77d1a926" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.429616 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-d829v" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.434667 4662 generic.go:334] "Generic (PLEG): container finished" podID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerID="de6bdd347068e8a7c063bb7a8d072308fc12f21e8ae059d4425c30046b69414c" exitCode=137 Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.434832 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.435623 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a4d70cc0-8e18-42c6-842b-25ba8229f4e2","Type":"ContainerDied","Data":"de6bdd347068e8a7c063bb7a8d072308fc12f21e8ae059d4425c30046b69414c"} Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.435655 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a4d70cc0-8e18-42c6-842b-25ba8229f4e2","Type":"ContainerDied","Data":"b9f6552a33ec02def9e7816806db6a44fdb04e0946f85aa0c04f1f597899827d"} Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.435669 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9f6552a33ec02def9e7816806db6a44fdb04e0946f85aa0c04f1f597899827d" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.435887 4662 util.go:48] "No ready sandbox for pod can be found. 
Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.436423 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-vpsr8" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.437157 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-0230-account-create-update-mqp6s" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.494660 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.582767 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-combined-ca-bundle\") pod \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.582807 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-etc-machine-id\") pod \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.582831 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data\") pod \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.582851 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl46z\" (UniqueName: \"kubernetes.io/projected/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-kube-api-access-gl46z\") pod \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.582924 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data-custom\") pod \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.582940 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-logs\") pod \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.583304 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-scripts\") pod \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\" (UID: \"a4d70cc0-8e18-42c6-842b-25ba8229f4e2\") " Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.594266 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a4d70cc0-8e18-42c6-842b-25ba8229f4e2" (UID: "a4d70cc0-8e18-42c6-842b-25ba8229f4e2"). InnerVolumeSpecName "etc-machine-id".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.595144 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-scripts" (OuterVolumeSpecName: "scripts") pod "a4d70cc0-8e18-42c6-842b-25ba8229f4e2" (UID: "a4d70cc0-8e18-42c6-842b-25ba8229f4e2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.595910 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-logs" (OuterVolumeSpecName: "logs") pod "a4d70cc0-8e18-42c6-842b-25ba8229f4e2" (UID: "a4d70cc0-8e18-42c6-842b-25ba8229f4e2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.602610 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-kube-api-access-gl46z" (OuterVolumeSpecName: "kube-api-access-gl46z") pod "a4d70cc0-8e18-42c6-842b-25ba8229f4e2" (UID: "a4d70cc0-8e18-42c6-842b-25ba8229f4e2"). InnerVolumeSpecName "kube-api-access-gl46z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.643363 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a4d70cc0-8e18-42c6-842b-25ba8229f4e2" (UID: "a4d70cc0-8e18-42c6-842b-25ba8229f4e2"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.672131 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4d70cc0-8e18-42c6-842b-25ba8229f4e2" (UID: "a4d70cc0-8e18-42c6-842b-25ba8229f4e2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.681822 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.687044 4662 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data-custom\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.687557 4662 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-logs\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.687630 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.687682 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.687771 4662 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-etc-machine-id\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.687832 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gl46z\" (UniqueName: \"kubernetes.io/projected/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-kube-api-access-gl46z\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.695273 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.711207 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:45 crc kubenswrapper[4662]: E1208 09:32:45.711847 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.711919 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api" Dec 08 09:32:45 crc kubenswrapper[4662]: E1208 09:32:45.712008 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="433f7180-bb32-4bf5-b1d2-c75388f8011d" containerName="mariadb-account-create-update" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.712064 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="433f7180-bb32-4bf5-b1d2-c75388f8011d" containerName="mariadb-account-create-update" Dec 08 09:32:45 crc kubenswrapper[4662]: E1208 09:32:45.712116 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2db14df8-28f0-42fc-9891-361afd774445" containerName="mariadb-database-create" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.712167 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2db14df8-28f0-42fc-9891-361afd774445" containerName="mariadb-database-create" Dec 08 09:32:45 crc kubenswrapper[4662]: E1208 09:32:45.712234 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd4aa2f3-f0db-4855-b972-e077877518c6" containerName="mariadb-database-create" Dec 08 09:32:45 crc 
kubenswrapper[4662]: I1208 09:32:45.712287 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4aa2f3-f0db-4855-b972-e077877518c6" containerName="mariadb-database-create" Dec 08 09:32:45 crc kubenswrapper[4662]: E1208 09:32:45.712334 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24937f1a-fdea-4383-9c39-885ee36af08c" containerName="mariadb-account-create-update" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.712385 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="24937f1a-fdea-4383-9c39-885ee36af08c" containerName="mariadb-account-create-update" Dec 08 09:32:45 crc kubenswrapper[4662]: E1208 09:32:45.712439 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api-log" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.712489 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api-log" Dec 08 09:32:45 crc kubenswrapper[4662]: E1208 09:32:45.712551 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9" containerName="mariadb-account-create-update" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.712599 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9" containerName="mariadb-account-create-update" Dec 08 09:32:45 crc kubenswrapper[4662]: E1208 09:32:45.712659 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8409e9a3-56e3-49a4-b270-ee8a2493fa75" containerName="mariadb-database-create" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.712712 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="8409e9a3-56e3-49a4-b270-ee8a2493fa75" containerName="mariadb-database-create" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.712931 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api-log" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.712993 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="24937f1a-fdea-4383-9c39-885ee36af08c" containerName="mariadb-account-create-update" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.713221 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="8409e9a3-56e3-49a4-b270-ee8a2493fa75" containerName="mariadb-database-create" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.713294 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9" containerName="mariadb-account-create-update" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.713355 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.713417 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="2db14df8-28f0-42fc-9891-361afd774445" containerName="mariadb-database-create" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.713582 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd4aa2f3-f0db-4855-b972-e077877518c6" containerName="mariadb-database-create" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.713645 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="433f7180-bb32-4bf5-b1d2-c75388f8011d" containerName="mariadb-account-create-update" Dec 08 09:32:45 crc 
kubenswrapper[4662]: I1208 09:32:45.720700 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.721177 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.724213 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.733111 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.740819 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data" (OuterVolumeSpecName: "config-data") pod "a4d70cc0-8e18-42c6-842b-25ba8229f4e2" (UID: "a4d70cc0-8e18-42c6-842b-25ba8229f4e2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.789356 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-log-httpd\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.789427 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-run-httpd\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.789486 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.789510 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c98c\" (UniqueName: \"kubernetes.io/projected/7cfb8924-7833-4a4f-ae89-d179f1545607-kube-api-access-7c98c\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.789536 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-config-data\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.789560 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.789593 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-scripts\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.789691 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4d70cc0-8e18-42c6-842b-25ba8229f4e2-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.891433 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-run-httpd\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.891939 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-run-httpd\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.892073 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7c98c\" (UniqueName: \"kubernetes.io/projected/7cfb8924-7833-4a4f-ae89-d179f1545607-kube-api-access-7c98c\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.892126 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.892976 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-config-data\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.893046 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.893143 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-scripts\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.893298 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-log-httpd\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.893599 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-log-httpd\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " 
pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.896839 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-config-data\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.897435 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-scripts\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.897824 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.898573 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:45 crc kubenswrapper[4662]: I1208 09:32:45.911900 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7c98c\" (UniqueName: \"kubernetes.io/projected/7cfb8924-7833-4a4f-ae89-d179f1545607-kube-api-access-7c98c\") pod \"ceilometer-0\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " pod="openstack/ceilometer-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.056541 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.441313 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.478609 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.486118 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.498435 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.500169 4662 util.go:30] "No sandbox for pod can be found. 
Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.502295 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.502497 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.502606 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.517245 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.587940 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.612127 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.612213 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-config-data\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.612547 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-scripts\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.612647 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77f9af46-d962-49bc-96cb-5740adc30c48-etc-machine-id\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.612677 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-config-data-custom\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.612701 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-public-tls-certs\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.612726 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f9af46-d962-49bc-96cb-5740adc30c48-logs\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.612795 4662 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5595v\" (UniqueName: \"kubernetes.io/projected/77f9af46-d962-49bc-96cb-5740adc30c48-kube-api-access-5595v\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.612872 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.709307 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" path="/var/lib/kubelet/pods/a4d70cc0-8e18-42c6-842b-25ba8229f4e2/volumes" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.710192 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d258d8f4-56f3-4ab1-8361-882aa7237b39" path="/var/lib/kubelet/pods/d258d8f4-56f3-4ab1-8361-882aa7237b39/volumes" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.714250 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77f9af46-d962-49bc-96cb-5740adc30c48-etc-machine-id\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.714302 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-config-data-custom\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.714330 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-public-tls-certs\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.714355 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f9af46-d962-49bc-96cb-5740adc30c48-logs\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.714380 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5595v\" (UniqueName: \"kubernetes.io/projected/77f9af46-d962-49bc-96cb-5740adc30c48-kube-api-access-5595v\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.714429 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.714460 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.714515 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-config-data\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.714614 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-scripts\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.716243 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77f9af46-d962-49bc-96cb-5740adc30c48-logs\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.716320 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/77f9af46-d962-49bc-96cb-5740adc30c48-etc-machine-id\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.722315 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-scripts\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.722987 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-config-data\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.724360 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-config-data-custom\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.731041 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.741566 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.747395 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/77f9af46-d962-49bc-96cb-5740adc30c48-public-tls-certs\") pod \"cinder-api-0\" 
(UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.750263 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5595v\" (UniqueName: \"kubernetes.io/projected/77f9af46-d962-49bc-96cb-5740adc30c48-kube-api-access-5595v\") pod \"cinder-api-0\" (UID: \"77f9af46-d962-49bc-96cb-5740adc30c48\") " pod="openstack/cinder-api-0" Dec 08 09:32:46 crc kubenswrapper[4662]: I1208 09:32:46.816546 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Dec 08 09:32:47 crc kubenswrapper[4662]: I1208 09:32:47.392562 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Dec 08 09:32:47 crc kubenswrapper[4662]: I1208 09:32:47.476709 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"77f9af46-d962-49bc-96cb-5740adc30c48","Type":"ContainerStarted","Data":"60187ff3b37becc1df65fad57317c9bf803a136084cde1b43f1f90657bed489a"} Dec 08 09:32:47 crc kubenswrapper[4662]: I1208 09:32:47.487779 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cfb8924-7833-4a4f-ae89-d179f1545607","Type":"ContainerStarted","Data":"2015a882015a69e94e89f53f44e8ce8a57306afe45eb79e740f2d1c2459c372d"} Dec 08 09:32:47 crc kubenswrapper[4662]: I1208 09:32:47.487850 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cfb8924-7833-4a4f-ae89-d179f1545607","Type":"ContainerStarted","Data":"9ac917ee54d43e3eb5af2a6e281ce32fe4a8596554f714d6dc4fb1ac8c7a18b2"} Dec 08 09:32:48 crc kubenswrapper[4662]: I1208 09:32:48.510634 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"77f9af46-d962-49bc-96cb-5740adc30c48","Type":"ContainerStarted","Data":"6f6ec5f4a36f0244182c9a30c13bd29fdb0a7f63a870daddaddc8b5aeee7a570"} Dec 08 09:32:48 crc kubenswrapper[4662]: I1208 09:32:48.514392 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cfb8924-7833-4a4f-ae89-d179f1545607","Type":"ContainerStarted","Data":"64277893473a833c504ca2fa3015b007f645216f5b6a4bcc1bca3ed509a8fdc8"} Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.440123 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-q7bbm"] Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.441944 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.447607 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.447676 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.447860 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m4tl5" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.453805 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-q7bbm"] Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.524156 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"77f9af46-d962-49bc-96cb-5740adc30c48","Type":"ContainerStarted","Data":"65b7ed2996df4135fdc37c820e7a0a82604180c91b37b832e1a20d00df0f0f3b"} Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.524666 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.526380 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cfb8924-7833-4a4f-ae89-d179f1545607","Type":"ContainerStarted","Data":"f092a2332404590163bc851bb34cb79477b6ce58fc3e81f40050b582d13f1d97"} Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.547675 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.547655573 podStartE2EDuration="3.547655573s" podCreationTimestamp="2025-12-08 09:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:32:49.542851034 +0000 UTC m=+1093.111879034" watchObservedRunningTime="2025-12-08 09:32:49.547655573 +0000 UTC m=+1093.116683563" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.564735 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-scripts\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.564822 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.564842 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5779\" (UniqueName: \"kubernetes.io/projected/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-kube-api-access-z5779\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.564900 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-config-data\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.666890 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-scripts\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.666965 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.666985 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5779\" (UniqueName: \"kubernetes.io/projected/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-kube-api-access-z5779\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.667046 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-config-data\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.672487 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-scripts\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.672573 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.672584 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-config-data\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.695058 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5779\" (UniqueName: \"kubernetes.io/projected/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-kube-api-access-z5779\") pod \"nova-cell0-conductor-db-sync-q7bbm\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:49 crc kubenswrapper[4662]: I1208 09:32:49.759331 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:32:50 crc kubenswrapper[4662]: I1208 09:32:50.310271 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-q7bbm"] Dec 08 09:32:50 crc kubenswrapper[4662]: W1208 09:32:50.314448 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode93e89c3_cd7d_4205_9741_ac087d1d7bd6.slice/crio-74abb9c74f7d6138df7b9df232e85863f1892d4661ed79f9bcf376e3b928ab83 WatchSource:0}: Error finding container 74abb9c74f7d6138df7b9df232e85863f1892d4661ed79f9bcf376e3b928ab83: Status 404 returned error can't find the container with id 74abb9c74f7d6138df7b9df232e85863f1892d4661ed79f9bcf376e3b928ab83 Dec 08 09:32:50 crc kubenswrapper[4662]: I1208 09:32:50.390499 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="a4d70cc0-8e18-42c6-842b-25ba8229f4e2" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.153:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 08 09:32:50 crc kubenswrapper[4662]: I1208 09:32:50.557777 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cfb8924-7833-4a4f-ae89-d179f1545607","Type":"ContainerStarted","Data":"d9b264462e7d281ff8acc4cb86d578f15d570701eab1626706f091fc637c9378"} Dec 08 09:32:50 crc kubenswrapper[4662]: I1208 09:32:50.560892 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-q7bbm" event={"ID":"e93e89c3-cd7d-4205-9741-ac087d1d7bd6","Type":"ContainerStarted","Data":"74abb9c74f7d6138df7b9df232e85863f1892d4661ed79f9bcf376e3b928ab83"} Dec 08 09:32:50 crc kubenswrapper[4662]: I1208 09:32:50.601694 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.429667136 podStartE2EDuration="5.601653907s" podCreationTimestamp="2025-12-08 09:32:45 +0000 UTC" firstStartedPulling="2025-12-08 09:32:46.583481857 +0000 UTC m=+1090.152509847" lastFinishedPulling="2025-12-08 09:32:49.755468628 +0000 UTC m=+1093.324496618" observedRunningTime="2025-12-08 09:32:50.593400536 +0000 UTC m=+1094.162428526" watchObservedRunningTime="2025-12-08 09:32:50.601653907 +0000 UTC m=+1094.170681897" Dec 08 09:32:51 crc kubenswrapper[4662]: I1208 09:32:51.568538 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 08 09:32:56 crc kubenswrapper[4662]: I1208 09:32:56.760064 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:32:56 crc kubenswrapper[4662]: I1208 09:32:56.761019 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="ceilometer-central-agent" containerID="cri-o://2015a882015a69e94e89f53f44e8ce8a57306afe45eb79e740f2d1c2459c372d" gracePeriod=30 Dec 08 09:32:56 crc kubenswrapper[4662]: I1208 09:32:56.761237 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="proxy-httpd" containerID="cri-o://d9b264462e7d281ff8acc4cb86d578f15d570701eab1626706f091fc637c9378" gracePeriod=30 Dec 08 09:32:56 crc kubenswrapper[4662]: I1208 09:32:56.761292 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="sg-core" containerID="cri-o://f092a2332404590163bc851bb34cb79477b6ce58fc3e81f40050b582d13f1d97" gracePeriod=30
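The pod_startup_latency_tracker entry above for ceilometer-0 is internally consistent: its numbers suggest podStartSLOduration is podStartE2EDuration with the image pull window subtracted. A worked check of that arithmetic follows (the interpretation is my reading of the numbers in this entry, not a sourced statement about the tracker's internals):

// startuplatency.go -- sketch: re-derive the relationship visible in the entry above.
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseKlogTime parses timestamps as they appear in the entry, e.g.
// "2025-12-08 09:32:46.583481857 +0000 UTC m=+1090.152509847", dropping the
// monotonic-clock suffix (" m=+...") before parsing.
func parseKlogTime(s string) time.Time {
	if i := strings.Index(s, " m=+"); i >= 0 {
		s = s[:i]
	}
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Values copied verbatim from the ceilometer-0 entry above.
	first := parseKlogTime("2025-12-08 09:32:46.583481857 +0000 UTC m=+1090.152509847")
	last := parseKlogTime("2025-12-08 09:32:49.755468628 +0000 UTC m=+1093.324496618")
	e2e, _ := time.ParseDuration("5.601653907s")

	pull := last.Sub(first)
	fmt.Println("image pull window:", pull)     // 3.171986771s
	fmt.Println("E2E minus pull:   ", e2e-pull) // 2.429667136s, the podStartSLOduration value
}

The cinder-api-0 entry a few lines earlier is the degenerate case: its pull timestamps are the zero time, and podStartSLOduration equals podStartE2EDuration (3.547655573s).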
pod="openstack/ceilometer-0" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="sg-core" containerID="cri-o://f092a2332404590163bc851bb34cb79477b6ce58fc3e81f40050b582d13f1d97" gracePeriod=30 Dec 08 09:32:56 crc kubenswrapper[4662]: I1208 09:32:56.761332 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="ceilometer-notification-agent" containerID="cri-o://64277893473a833c504ca2fa3015b007f645216f5b6a4bcc1bca3ed509a8fdc8" gracePeriod=30 Dec 08 09:32:57 crc kubenswrapper[4662]: I1208 09:32:57.628001 4662 generic.go:334] "Generic (PLEG): container finished" podID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerID="d9b264462e7d281ff8acc4cb86d578f15d570701eab1626706f091fc637c9378" exitCode=0 Dec 08 09:32:57 crc kubenswrapper[4662]: I1208 09:32:57.628038 4662 generic.go:334] "Generic (PLEG): container finished" podID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerID="f092a2332404590163bc851bb34cb79477b6ce58fc3e81f40050b582d13f1d97" exitCode=2 Dec 08 09:32:57 crc kubenswrapper[4662]: I1208 09:32:57.628051 4662 generic.go:334] "Generic (PLEG): container finished" podID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerID="64277893473a833c504ca2fa3015b007f645216f5b6a4bcc1bca3ed509a8fdc8" exitCode=0 Dec 08 09:32:57 crc kubenswrapper[4662]: I1208 09:32:57.628062 4662 generic.go:334] "Generic (PLEG): container finished" podID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerID="2015a882015a69e94e89f53f44e8ce8a57306afe45eb79e740f2d1c2459c372d" exitCode=0 Dec 08 09:32:57 crc kubenswrapper[4662]: I1208 09:32:57.628063 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cfb8924-7833-4a4f-ae89-d179f1545607","Type":"ContainerDied","Data":"d9b264462e7d281ff8acc4cb86d578f15d570701eab1626706f091fc637c9378"} Dec 08 09:32:57 crc kubenswrapper[4662]: I1208 09:32:57.628106 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cfb8924-7833-4a4f-ae89-d179f1545607","Type":"ContainerDied","Data":"f092a2332404590163bc851bb34cb79477b6ce58fc3e81f40050b582d13f1d97"} Dec 08 09:32:57 crc kubenswrapper[4662]: I1208 09:32:57.628116 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cfb8924-7833-4a4f-ae89-d179f1545607","Type":"ContainerDied","Data":"64277893473a833c504ca2fa3015b007f645216f5b6a4bcc1bca3ed509a8fdc8"} Dec 08 09:32:57 crc kubenswrapper[4662]: I1208 09:32:57.628125 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cfb8924-7833-4a4f-ae89-d179f1545607","Type":"ContainerDied","Data":"2015a882015a69e94e89f53f44e8ce8a57306afe45eb79e740f2d1c2459c372d"} Dec 08 09:32:59 crc kubenswrapper[4662]: I1208 09:32:59.058614 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Dec 08 09:33:02 crc kubenswrapper[4662]: I1208 09:33:02.611296 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:33:02 crc kubenswrapper[4662]: I1208 09:33:02.612630 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:33:03 crc kubenswrapper[4662]: E1208 09:33:03.236794 4662 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Dec 08 09:33:03 crc kubenswrapper[4662]: E1208 09:33:03.237802 4662 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z5779,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-q7bbm_openstack(e93e89c3-cd7d-4205-9741-ac087d1d7bd6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Dec 08 09:33:03 crc kubenswrapper[4662]: E1208 09:33:03.239100 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-q7bbm" podUID="e93e89c3-cd7d-4205-9741-ac087d1d7bd6" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.544846 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.667094 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-config-data\") pod \"7cfb8924-7833-4a4f-ae89-d179f1545607\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.667177 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-combined-ca-bundle\") pod \"7cfb8924-7833-4a4f-ae89-d179f1545607\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.667218 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-log-httpd\") pod \"7cfb8924-7833-4a4f-ae89-d179f1545607\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.667288 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-sg-core-conf-yaml\") pod \"7cfb8924-7833-4a4f-ae89-d179f1545607\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.667354 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-scripts\") pod \"7cfb8924-7833-4a4f-ae89-d179f1545607\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.667456 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-run-httpd\") pod \"7cfb8924-7833-4a4f-ae89-d179f1545607\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.667505 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c98c\" (UniqueName: \"kubernetes.io/projected/7cfb8924-7833-4a4f-ae89-d179f1545607-kube-api-access-7c98c\") pod \"7cfb8924-7833-4a4f-ae89-d179f1545607\" (UID: \"7cfb8924-7833-4a4f-ae89-d179f1545607\") " Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.668126 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "7cfb8924-7833-4a4f-ae89-d179f1545607" (UID: "7cfb8924-7833-4a4f-ae89-d179f1545607"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.668804 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "7cfb8924-7833-4a4f-ae89-d179f1545607" (UID: "7cfb8924-7833-4a4f-ae89-d179f1545607"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.684038 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-scripts" (OuterVolumeSpecName: "scripts") pod "7cfb8924-7833-4a4f-ae89-d179f1545607" (UID: "7cfb8924-7833-4a4f-ae89-d179f1545607"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.685815 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cfb8924-7833-4a4f-ae89-d179f1545607-kube-api-access-7c98c" (OuterVolumeSpecName: "kube-api-access-7c98c") pod "7cfb8924-7833-4a4f-ae89-d179f1545607" (UID: "7cfb8924-7833-4a4f-ae89-d179f1545607"). InnerVolumeSpecName "kube-api-access-7c98c". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.695665 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.695832 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"7cfb8924-7833-4a4f-ae89-d179f1545607","Type":"ContainerDied","Data":"9ac917ee54d43e3eb5af2a6e281ce32fe4a8596554f714d6dc4fb1ac8c7a18b2"} Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.695905 4662 scope.go:117] "RemoveContainer" containerID="d9b264462e7d281ff8acc4cb86d578f15d570701eab1626706f091fc637c9378" Dec 08 09:33:03 crc kubenswrapper[4662]: E1208 09:33:03.696655 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-q7bbm" podUID="e93e89c3-cd7d-4205-9741-ac087d1d7bd6" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.703107 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "7cfb8924-7833-4a4f-ae89-d179f1545607" (UID: "7cfb8924-7833-4a4f-ae89-d179f1545607"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.732947 4662 scope.go:117] "RemoveContainer" containerID="f092a2332404590163bc851bb34cb79477b6ce58fc3e81f40050b582d13f1d97" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.736675 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cfb8924-7833-4a4f-ae89-d179f1545607" (UID: "7cfb8924-7833-4a4f-ae89-d179f1545607"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.750677 4662 scope.go:117] "RemoveContainer" containerID="64277893473a833c504ca2fa3015b007f645216f5b6a4bcc1bca3ed509a8fdc8" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.767804 4662 scope.go:117] "RemoveContainer" containerID="2015a882015a69e94e89f53f44e8ce8a57306afe45eb79e740f2d1c2459c372d" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.770240 4662 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.770278 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.770288 4662 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.770297 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c98c\" (UniqueName: \"kubernetes.io/projected/7cfb8924-7833-4a4f-ae89-d179f1545607-kube-api-access-7c98c\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.770307 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.770315 4662 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/7cfb8924-7833-4a4f-ae89-d179f1545607-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.801337 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-config-data" (OuterVolumeSpecName: "config-data") pod "7cfb8924-7833-4a4f-ae89-d179f1545607" (UID: "7cfb8924-7833-4a4f-ae89-d179f1545607"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:03 crc kubenswrapper[4662]: I1208 09:33:03.872520 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cfb8924-7833-4a4f-ae89-d179f1545607-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.029647 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.039292 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.054901 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:04 crc kubenswrapper[4662]: E1208 09:33:04.055591 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="ceilometer-central-agent" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.055701 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="ceilometer-central-agent" Dec 08 09:33:04 crc kubenswrapper[4662]: E1208 09:33:04.055808 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="ceilometer-notification-agent" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.055873 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="ceilometer-notification-agent" Dec 08 09:33:04 crc kubenswrapper[4662]: E1208 09:33:04.055953 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="proxy-httpd" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.056004 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="proxy-httpd" Dec 08 09:33:04 crc kubenswrapper[4662]: E1208 09:33:04.056073 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="sg-core" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.060542 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="sg-core" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.061210 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="ceilometer-notification-agent" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.061356 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="ceilometer-central-agent" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.061464 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="proxy-httpd" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.061547 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" containerName="sg-core" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.063769 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.066429 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.066537 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.083126 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.176621 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-log-httpd\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.176675 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-run-httpd\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.176758 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.176775 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-scripts\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.176793 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.176810 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-config-data\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.176859 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nhrn\" (UniqueName: \"kubernetes.io/projected/407e9986-72b0-4507-bbad-2c1a424838fc-kube-api-access-5nhrn\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.278441 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-run-httpd\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.278674 4662 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.278733 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-scripts\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.278824 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.278869 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-config-data\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.279013 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nhrn\" (UniqueName: \"kubernetes.io/projected/407e9986-72b0-4507-bbad-2c1a424838fc-kube-api-access-5nhrn\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.279131 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-log-httpd\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.279500 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-run-httpd\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.281300 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-log-httpd\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.283516 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.289302 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.291003 4662 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-config-data\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.293625 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-scripts\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.300686 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nhrn\" (UniqueName: \"kubernetes.io/projected/407e9986-72b0-4507-bbad-2c1a424838fc-kube-api-access-5nhrn\") pod \"ceilometer-0\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.387969 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.710551 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cfb8924-7833-4a4f-ae89-d179f1545607" path="/var/lib/kubelet/pods/7cfb8924-7833-4a4f-ae89-d179f1545607/volumes" Dec 08 09:33:04 crc kubenswrapper[4662]: I1208 09:33:04.824844 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:05 crc kubenswrapper[4662]: I1208 09:33:05.718767 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"407e9986-72b0-4507-bbad-2c1a424838fc","Type":"ContainerStarted","Data":"f2527f285a272cc798904fe78014544c79cf6638b4c34684d86b972c81d85033"} Dec 08 09:33:05 crc kubenswrapper[4662]: I1208 09:33:05.719329 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"407e9986-72b0-4507-bbad-2c1a424838fc","Type":"ContainerStarted","Data":"d51af51bcfeb984b7bf61c0721b87534480375ea433507733c2c3a8891cd5907"} Dec 08 09:33:08 crc kubenswrapper[4662]: I1208 09:33:08.744821 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"407e9986-72b0-4507-bbad-2c1a424838fc","Type":"ContainerStarted","Data":"4a929e9e435921eab430eb90408d8b09af2b70a8867855dffe1d57e532bfda8e"} Dec 08 09:33:08 crc kubenswrapper[4662]: I1208 09:33:08.745371 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"407e9986-72b0-4507-bbad-2c1a424838fc","Type":"ContainerStarted","Data":"df5ca0f881c2f983bb986ffc27d27858bb0b8ce257da6e777394424de44a1e37"} Dec 08 09:33:09 crc kubenswrapper[4662]: I1208 09:33:09.283490 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:10 crc kubenswrapper[4662]: I1208 09:33:10.766953 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"407e9986-72b0-4507-bbad-2c1a424838fc","Type":"ContainerStarted","Data":"11bb3e605b9877522cb0bc542f189f66aa24ee5188434b2ff557f4044d2e0d38"} Dec 08 09:33:10 crc kubenswrapper[4662]: I1208 09:33:10.767089 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="ceilometer-central-agent" containerID="cri-o://f2527f285a272cc798904fe78014544c79cf6638b4c34684d86b972c81d85033" gracePeriod=30 Dec 08 09:33:10 crc kubenswrapper[4662]: I1208 
09:33:10.767554 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="proxy-httpd" containerID="cri-o://11bb3e605b9877522cb0bc542f189f66aa24ee5188434b2ff557f4044d2e0d38" gracePeriod=30 Dec 08 09:33:10 crc kubenswrapper[4662]: I1208 09:33:10.767596 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="sg-core" containerID="cri-o://4a929e9e435921eab430eb90408d8b09af2b70a8867855dffe1d57e532bfda8e" gracePeriod=30 Dec 08 09:33:10 crc kubenswrapper[4662]: I1208 09:33:10.767642 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 08 09:33:10 crc kubenswrapper[4662]: I1208 09:33:10.767643 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="ceilometer-notification-agent" containerID="cri-o://df5ca0f881c2f983bb986ffc27d27858bb0b8ce257da6e777394424de44a1e37" gracePeriod=30 Dec 08 09:33:10 crc kubenswrapper[4662]: I1208 09:33:10.804275 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.28689183 podStartE2EDuration="6.804251561s" podCreationTimestamp="2025-12-08 09:33:04 +0000 UTC" firstStartedPulling="2025-12-08 09:33:04.837180893 +0000 UTC m=+1108.406208883" lastFinishedPulling="2025-12-08 09:33:10.354540624 +0000 UTC m=+1113.923568614" observedRunningTime="2025-12-08 09:33:10.788056889 +0000 UTC m=+1114.357084889" watchObservedRunningTime="2025-12-08 09:33:10.804251561 +0000 UTC m=+1114.373279561" Dec 08 09:33:11 crc kubenswrapper[4662]: I1208 09:33:11.777002 4662 generic.go:334] "Generic (PLEG): container finished" podID="407e9986-72b0-4507-bbad-2c1a424838fc" containerID="4a929e9e435921eab430eb90408d8b09af2b70a8867855dffe1d57e532bfda8e" exitCode=2 Dec 08 09:33:11 crc kubenswrapper[4662]: I1208 09:33:11.777302 4662 generic.go:334] "Generic (PLEG): container finished" podID="407e9986-72b0-4507-bbad-2c1a424838fc" containerID="df5ca0f881c2f983bb986ffc27d27858bb0b8ce257da6e777394424de44a1e37" exitCode=0 Dec 08 09:33:11 crc kubenswrapper[4662]: I1208 09:33:11.777061 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"407e9986-72b0-4507-bbad-2c1a424838fc","Type":"ContainerDied","Data":"4a929e9e435921eab430eb90408d8b09af2b70a8867855dffe1d57e532bfda8e"} Dec 08 09:33:11 crc kubenswrapper[4662]: I1208 09:33:11.777337 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"407e9986-72b0-4507-bbad-2c1a424838fc","Type":"ContainerDied","Data":"df5ca0f881c2f983bb986ffc27d27858bb0b8ce257da6e777394424de44a1e37"} Dec 08 09:33:12 crc kubenswrapper[4662]: I1208 09:33:12.789107 4662 generic.go:334] "Generic (PLEG): container finished" podID="407e9986-72b0-4507-bbad-2c1a424838fc" containerID="f2527f285a272cc798904fe78014544c79cf6638b4c34684d86b972c81d85033" exitCode=0 Dec 08 09:33:12 crc kubenswrapper[4662]: I1208 09:33:12.789159 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"407e9986-72b0-4507-bbad-2c1a424838fc","Type":"ContainerDied","Data":"f2527f285a272cc798904fe78014544c79cf6638b4c34684d86b972c81d85033"} Dec 08 09:33:17 crc kubenswrapper[4662]: I1208 09:33:17.836681 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell0-conductor-db-sync-q7bbm" event={"ID":"e93e89c3-cd7d-4205-9741-ac087d1d7bd6","Type":"ContainerStarted","Data":"996f2cc86adc17feaa6089916c9026b603933b3ab7cdcbb78ec4872fce420ed8"} Dec 08 09:33:17 crc kubenswrapper[4662]: I1208 09:33:17.861979 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-q7bbm" podStartSLOduration=1.945660192 podStartE2EDuration="28.861957852s" podCreationTimestamp="2025-12-08 09:32:49 +0000 UTC" firstStartedPulling="2025-12-08 09:32:50.316639005 +0000 UTC m=+1093.885666995" lastFinishedPulling="2025-12-08 09:33:17.232936665 +0000 UTC m=+1120.801964655" observedRunningTime="2025-12-08 09:33:17.861056898 +0000 UTC m=+1121.430084908" watchObservedRunningTime="2025-12-08 09:33:17.861957852 +0000 UTC m=+1121.430985842" Dec 08 09:33:27 crc kubenswrapper[4662]: I1208 09:33:27.942167 4662 generic.go:334] "Generic (PLEG): container finished" podID="e93e89c3-cd7d-4205-9741-ac087d1d7bd6" containerID="996f2cc86adc17feaa6089916c9026b603933b3ab7cdcbb78ec4872fce420ed8" exitCode=0 Dec 08 09:33:27 crc kubenswrapper[4662]: I1208 09:33:27.942255 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-q7bbm" event={"ID":"e93e89c3-cd7d-4205-9741-ac087d1d7bd6","Type":"ContainerDied","Data":"996f2cc86adc17feaa6089916c9026b603933b3ab7cdcbb78ec4872fce420ed8"} Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.345489 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.460754 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-combined-ca-bundle\") pod \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.460881 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-scripts\") pod \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.461073 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5779\" (UniqueName: \"kubernetes.io/projected/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-kube-api-access-z5779\") pod \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.461202 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-config-data\") pod \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\" (UID: \"e93e89c3-cd7d-4205-9741-ac087d1d7bd6\") " Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.468935 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-scripts" (OuterVolumeSpecName: "scripts") pod "e93e89c3-cd7d-4205-9741-ac087d1d7bd6" (UID: "e93e89c3-cd7d-4205-9741-ac087d1d7bd6"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.468935 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-kube-api-access-z5779" (OuterVolumeSpecName: "kube-api-access-z5779") pod "e93e89c3-cd7d-4205-9741-ac087d1d7bd6" (UID: "e93e89c3-cd7d-4205-9741-ac087d1d7bd6"). InnerVolumeSpecName "kube-api-access-z5779". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.493454 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e93e89c3-cd7d-4205-9741-ac087d1d7bd6" (UID: "e93e89c3-cd7d-4205-9741-ac087d1d7bd6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.498615 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-config-data" (OuterVolumeSpecName: "config-data") pod "e93e89c3-cd7d-4205-9741-ac087d1d7bd6" (UID: "e93e89c3-cd7d-4205-9741-ac087d1d7bd6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.563378 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.563405 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.563414 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5779\" (UniqueName: \"kubernetes.io/projected/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-kube-api-access-z5779\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.563424 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e93e89c3-cd7d-4205-9741-ac087d1d7bd6-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.959558 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-q7bbm" event={"ID":"e93e89c3-cd7d-4205-9741-ac087d1d7bd6","Type":"ContainerDied","Data":"74abb9c74f7d6138df7b9df232e85863f1892d4661ed79f9bcf376e3b928ab83"} Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.959882 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74abb9c74f7d6138df7b9df232e85863f1892d4661ed79f9bcf376e3b928ab83" Dec 08 09:33:29 crc kubenswrapper[4662]: I1208 09:33:29.959933 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-q7bbm" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.068554 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 08 09:33:30 crc kubenswrapper[4662]: E1208 09:33:30.068968 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e93e89c3-cd7d-4205-9741-ac087d1d7bd6" containerName="nova-cell0-conductor-db-sync" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.068988 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="e93e89c3-cd7d-4205-9741-ac087d1d7bd6" containerName="nova-cell0-conductor-db-sync" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.069167 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="e93e89c3-cd7d-4205-9741-ac087d1d7bd6" containerName="nova-cell0-conductor-db-sync" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.069719 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.071510 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.079991 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.080978 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-m4tl5" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.175667 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56181caf-3a5f-49f9-8041-05084a240a3a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"56181caf-3a5f-49f9-8041-05084a240a3a\") " pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.175733 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56181caf-3a5f-49f9-8041-05084a240a3a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"56181caf-3a5f-49f9-8041-05084a240a3a\") " pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.175832 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gjgm\" (UniqueName: \"kubernetes.io/projected/56181caf-3a5f-49f9-8041-05084a240a3a-kube-api-access-8gjgm\") pod \"nova-cell0-conductor-0\" (UID: \"56181caf-3a5f-49f9-8041-05084a240a3a\") " pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.277824 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gjgm\" (UniqueName: \"kubernetes.io/projected/56181caf-3a5f-49f9-8041-05084a240a3a-kube-api-access-8gjgm\") pod \"nova-cell0-conductor-0\" (UID: \"56181caf-3a5f-49f9-8041-05084a240a3a\") " pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.277944 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56181caf-3a5f-49f9-8041-05084a240a3a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"56181caf-3a5f-49f9-8041-05084a240a3a\") " pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: 
I1208 09:33:30.277981 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56181caf-3a5f-49f9-8041-05084a240a3a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"56181caf-3a5f-49f9-8041-05084a240a3a\") " pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.287666 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/56181caf-3a5f-49f9-8041-05084a240a3a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"56181caf-3a5f-49f9-8041-05084a240a3a\") " pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.291992 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/56181caf-3a5f-49f9-8041-05084a240a3a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"56181caf-3a5f-49f9-8041-05084a240a3a\") " pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.296814 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gjgm\" (UniqueName: \"kubernetes.io/projected/56181caf-3a5f-49f9-8041-05084a240a3a-kube-api-access-8gjgm\") pod \"nova-cell0-conductor-0\" (UID: \"56181caf-3a5f-49f9-8041-05084a240a3a\") " pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.389276 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.842346 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Dec 08 09:33:30 crc kubenswrapper[4662]: I1208 09:33:30.972966 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"56181caf-3a5f-49f9-8041-05084a240a3a","Type":"ContainerStarted","Data":"c8a93e7a4aee418de7aa02e8d86524f4d6017a00e6eea3c9906fd4f762ec7c42"} Dec 08 09:33:31 crc kubenswrapper[4662]: I1208 09:33:31.983600 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"56181caf-3a5f-49f9-8041-05084a240a3a","Type":"ContainerStarted","Data":"7ada1fad6bbc87917ab03cabc1dc929321a902af9120bdf88292a05465b0caee"} Dec 08 09:33:31 crc kubenswrapper[4662]: I1208 09:33:31.983872 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:32 crc kubenswrapper[4662]: I1208 09:33:32.004881 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.004865375 podStartE2EDuration="2.004865375s" podCreationTimestamp="2025-12-08 09:33:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:33:32.002317937 +0000 UTC m=+1135.571345947" watchObservedRunningTime="2025-12-08 09:33:32.004865375 +0000 UTC m=+1135.573893365" Dec 08 09:33:32 crc kubenswrapper[4662]: I1208 09:33:32.611396 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:33:32 crc kubenswrapper[4662]: I1208 
09:33:32.611490 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:33:32 crc kubenswrapper[4662]: I1208 09:33:32.611559 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:33:32 crc kubenswrapper[4662]: I1208 09:33:32.612503 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1a38cb2745e5efa64e1f916b4601a82b9010af6b77998742afd727eba45f786c"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 09:33:32 crc kubenswrapper[4662]: I1208 09:33:32.612615 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" containerID="cri-o://1a38cb2745e5efa64e1f916b4601a82b9010af6b77998742afd727eba45f786c" gracePeriod=600 Dec 08 09:33:32 crc kubenswrapper[4662]: I1208 09:33:32.996312 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerID="1a38cb2745e5efa64e1f916b4601a82b9010af6b77998742afd727eba45f786c" exitCode=0 Dec 08 09:33:32 crc kubenswrapper[4662]: I1208 09:33:32.996385 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"1a38cb2745e5efa64e1f916b4601a82b9010af6b77998742afd727eba45f786c"} Dec 08 09:33:32 crc kubenswrapper[4662]: I1208 09:33:32.996597 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"2e9e9e908f5c82fe5008d57bd1d536ac24016344c62104f9a96f0f2c0d9a74ee"} Dec 08 09:33:32 crc kubenswrapper[4662]: I1208 09:33:32.996621 4662 scope.go:117] "RemoveContainer" containerID="d04de701f63b5c2e4111b66668ec4560be524ad9596aef41adf5fe2ab05b3e40" Dec 08 09:33:34 crc kubenswrapper[4662]: I1208 09:33:34.392561 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Dec 08 09:33:40 crc kubenswrapper[4662]: I1208 09:33:40.432472 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Dec 08 09:33:40 crc kubenswrapper[4662]: I1208 09:33:40.907176 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-p72s9"] Dec 08 09:33:40 crc kubenswrapper[4662]: I1208 09:33:40.908232 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:40 crc kubenswrapper[4662]: I1208 09:33:40.910420 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Dec 08 09:33:40 crc kubenswrapper[4662]: I1208 09:33:40.911159 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Dec 08 09:33:40 crc kubenswrapper[4662]: I1208 09:33:40.933073 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-p72s9"] Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.056875 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-scripts\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.056966 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-config-data\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.056990 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.057086 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t2rg\" (UniqueName: \"kubernetes.io/projected/648152ca-1c66-4843-ad6d-20450aa26819-kube-api-access-6t2rg\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.084438 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.085821 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.089280 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.104220 4662 generic.go:334] "Generic (PLEG): container finished" podID="407e9986-72b0-4507-bbad-2c1a424838fc" containerID="11bb3e605b9877522cb0bc542f189f66aa24ee5188434b2ff557f4044d2e0d38" exitCode=137 Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.104263 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"407e9986-72b0-4507-bbad-2c1a424838fc","Type":"ContainerDied","Data":"11bb3e605b9877522cb0bc542f189f66aa24ee5188434b2ff557f4044d2e0d38"} Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.111054 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.158656 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5dw5\" (UniqueName: \"kubernetes.io/projected/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-kube-api-access-z5dw5\") pod \"nova-scheduler-0\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.158727 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-scripts\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.158784 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.158823 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-config-data\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.158840 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.158939 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-config-data\") pod \"nova-scheduler-0\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.159090 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t2rg\" (UniqueName: \"kubernetes.io/projected/648152ca-1c66-4843-ad6d-20450aa26819-kube-api-access-6t2rg\") pod 
\"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.175911 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.184558 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-scripts\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.185923 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-config-data\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.195145 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t2rg\" (UniqueName: \"kubernetes.io/projected/648152ca-1c66-4843-ad6d-20450aa26819-kube-api-access-6t2rg\") pod \"nova-cell0-cell-mapping-p72s9\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.215842 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.217326 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.228352 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.241285 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.264815 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-config-data\") pod \"nova-scheduler-0\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.264964 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5dw5\" (UniqueName: \"kubernetes.io/projected/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-kube-api-access-z5dw5\") pod \"nova-scheduler-0\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.264999 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.286518 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.287118 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.322316 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-config-data\") pod \"nova-scheduler-0\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.367642 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-config-data\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.367685 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97ghh\" (UniqueName: \"kubernetes.io/projected/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-kube-api-access-97ghh\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.367840 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.367865 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-logs\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.374066 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5dw5\" (UniqueName: \"kubernetes.io/projected/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-kube-api-access-z5dw5\") pod \"nova-scheduler-0\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.407150 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.424064 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.425227 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.444885 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.477158 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-config-data\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.477199 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-97ghh\" (UniqueName: \"kubernetes.io/projected/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-kube-api-access-97ghh\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.477473 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.477496 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-logs\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.477897 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-logs\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.482156 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.489659 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-config-data\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.514862 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.535323 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97ghh\" (UniqueName: \"kubernetes.io/projected/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-kube-api-access-97ghh\") pod \"nova-metadata-0\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.578459 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.578524 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf787\" (UniqueName: \"kubernetes.io/projected/cb04d115-0144-400c-a1de-68d8f357c395-kube-api-access-hf787\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.578582 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.582641 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.592080 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-km2bx"] Dec 08 09:33:41 crc kubenswrapper[4662]: E1208 09:33:41.592630 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="proxy-httpd" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.592708 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="proxy-httpd" Dec 08 09:33:41 crc kubenswrapper[4662]: E1208 09:33:41.592807 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="sg-core" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.592872 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="sg-core" Dec 08 09:33:41 crc kubenswrapper[4662]: E1208 09:33:41.592998 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="ceilometer-notification-agent" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.593070 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="ceilometer-notification-agent" Dec 08 09:33:41 crc kubenswrapper[4662]: E1208 09:33:41.593139 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="ceilometer-central-agent" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.593200 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="ceilometer-central-agent" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.593472 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="proxy-httpd" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.593552 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="ceilometer-central-agent" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.593629 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="ceilometer-notification-agent" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.593707 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" containerName="sg-core" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.595118 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.654929 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-km2bx"] Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.669835 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.671541 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.677859 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.680006 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-scripts\") pod \"407e9986-72b0-4507-bbad-2c1a424838fc\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.680174 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-combined-ca-bundle\") pod \"407e9986-72b0-4507-bbad-2c1a424838fc\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.680315 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-run-httpd\") pod \"407e9986-72b0-4507-bbad-2c1a424838fc\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.680390 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nhrn\" (UniqueName: \"kubernetes.io/projected/407e9986-72b0-4507-bbad-2c1a424838fc-kube-api-access-5nhrn\") pod \"407e9986-72b0-4507-bbad-2c1a424838fc\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.680492 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-sg-core-conf-yaml\") pod \"407e9986-72b0-4507-bbad-2c1a424838fc\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.680608 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-config-data\") pod \"407e9986-72b0-4507-bbad-2c1a424838fc\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.680806 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-log-httpd\") pod \"407e9986-72b0-4507-bbad-2c1a424838fc\" (UID: \"407e9986-72b0-4507-bbad-2c1a424838fc\") " Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.681146 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.681253 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.681393 4662 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-468zc\" (UniqueName: \"kubernetes.io/projected/02568702-19da-4124-bbab-bb1e3cf80e48-kube-api-access-468zc\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.681519 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.681644 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.681840 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02568702-19da-4124-bbab-bb1e3cf80e48-logs\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.681917 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-config\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.681985 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qd5md\" (UniqueName: \"kubernetes.io/projected/44fba630-a42b-4233-a201-95137e220c54-kube-api-access-qd5md\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.682658 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.682811 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.682953 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf787\" (UniqueName: \"kubernetes.io/projected/cb04d115-0144-400c-a1de-68d8f357c395-kube-api-access-hf787\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.683092 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-config-data\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.688267 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "407e9986-72b0-4507-bbad-2c1a424838fc" (UID: "407e9986-72b0-4507-bbad-2c1a424838fc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.689641 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "407e9986-72b0-4507-bbad-2c1a424838fc" (UID: "407e9986-72b0-4507-bbad-2c1a424838fc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.697382 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.705855 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.711136 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.711899 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/407e9986-72b0-4507-bbad-2c1a424838fc-kube-api-access-5nhrn" (OuterVolumeSpecName: "kube-api-access-5nhrn") pod "407e9986-72b0-4507-bbad-2c1a424838fc" (UID: "407e9986-72b0-4507-bbad-2c1a424838fc"). InnerVolumeSpecName "kube-api-access-5nhrn". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.712820 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-scripts" (OuterVolumeSpecName: "scripts") pod "407e9986-72b0-4507-bbad-2c1a424838fc" (UID: "407e9986-72b0-4507-bbad-2c1a424838fc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.734407 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf787\" (UniqueName: \"kubernetes.io/projected/cb04d115-0144-400c-a1de-68d8f357c395-kube-api-access-hf787\") pod \"nova-cell1-novncproxy-0\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.764421 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.779806 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786133 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786271 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-config-data\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786349 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786408 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-468zc\" (UniqueName: \"kubernetes.io/projected/02568702-19da-4124-bbab-bb1e3cf80e48-kube-api-access-468zc\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786425 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786468 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786489 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02568702-19da-4124-bbab-bb1e3cf80e48-logs\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786504 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-config\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786519 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qd5md\" (UniqueName: \"kubernetes.io/projected/44fba630-a42b-4233-a201-95137e220c54-kube-api-access-qd5md\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786620 4662 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786632 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nhrn\" (UniqueName: \"kubernetes.io/projected/407e9986-72b0-4507-bbad-2c1a424838fc-kube-api-access-5nhrn\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786643 4662 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/407e9986-72b0-4507-bbad-2c1a424838fc-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.786651 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.791026 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-sb\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.792076 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-nb\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.795071 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-dns-svc\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.795660 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "407e9986-72b0-4507-bbad-2c1a424838fc" (UID: "407e9986-72b0-4507-bbad-2c1a424838fc"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.797132 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02568702-19da-4124-bbab-bb1e3cf80e48-logs\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.803777 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-config-data\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.809917 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.817124 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-config\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.834379 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qd5md\" (UniqueName: \"kubernetes.io/projected/44fba630-a42b-4233-a201-95137e220c54-kube-api-access-qd5md\") pod \"dnsmasq-dns-8b8cf6657-km2bx\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.837634 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-468zc\" (UniqueName: \"kubernetes.io/projected/02568702-19da-4124-bbab-bb1e3cf80e48-kube-api-access-468zc\") pod \"nova-api-0\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " pod="openstack/nova-api-0" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.887709 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "407e9986-72b0-4507-bbad-2c1a424838fc" (UID: "407e9986-72b0-4507-bbad-2c1a424838fc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.888764 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.888794 4662 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.943491 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.948845 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-config-data" (OuterVolumeSpecName: "config-data") pod "407e9986-72b0-4507-bbad-2c1a424838fc" (UID: "407e9986-72b0-4507-bbad-2c1a424838fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:41 crc kubenswrapper[4662]: I1208 09:33:41.996718 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/407e9986-72b0-4507-bbad-2c1a424838fc-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.027991 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.134662 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-p72s9"] Dec 08 09:33:42 crc kubenswrapper[4662]: W1208 09:33:42.188001 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod648152ca_1c66_4843_ad6d_20450aa26819.slice/crio-4b3e27d527e7bef9367e4e673ff08bff200c06478c9675d23f184292f74fc695 WatchSource:0}: Error finding container 4b3e27d527e7bef9367e4e673ff08bff200c06478c9675d23f184292f74fc695: Status 404 returned error can't find the container with id 4b3e27d527e7bef9367e4e673ff08bff200c06478c9675d23f184292f74fc695 Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.202251 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"407e9986-72b0-4507-bbad-2c1a424838fc","Type":"ContainerDied","Data":"d51af51bcfeb984b7bf61c0721b87534480375ea433507733c2c3a8891cd5907"} Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.202307 4662 scope.go:117] "RemoveContainer" containerID="11bb3e605b9877522cb0bc542f189f66aa24ee5188434b2ff557f4044d2e0d38" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.202515 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.268147 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.283823 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.287360 4662 scope.go:117] "RemoveContainer" containerID="4a929e9e435921eab430eb90408d8b09af2b70a8867855dffe1d57e532bfda8e" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.309002 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.311252 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.313780 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.314001 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.333422 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.358500 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.411717 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.411786 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-config-data\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.412833 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp4tt\" (UniqueName: \"kubernetes.io/projected/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-kube-api-access-cp4tt\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.412888 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.412920 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-scripts\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.412977 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-log-httpd\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.413001 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-run-httpd\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.440541 4662 scope.go:117] "RemoveContainer" containerID="df5ca0f881c2f983bb986ffc27d27858bb0b8ce257da6e777394424de44a1e37" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.498886 4662 scope.go:117] 
"RemoveContainer" containerID="f2527f285a272cc798904fe78014544c79cf6638b4c34684d86b972c81d85033" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.514782 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-scripts\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.515215 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-log-httpd\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.515416 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-run-httpd\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.515501 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.515525 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-config-data\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.515596 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp4tt\" (UniqueName: \"kubernetes.io/projected/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-kube-api-access-cp4tt\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.515633 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.516499 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-run-httpd\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.518583 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-log-httpd\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.527542 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-scripts\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " 
pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.528264 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.533593 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.535379 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-config-data\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.539378 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp4tt\" (UniqueName: \"kubernetes.io/projected/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-kube-api-access-cp4tt\") pod \"ceilometer-0\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: W1208 09:33:42.598626 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb04d115_0144_400c_a1de_68d8f357c395.slice/crio-e4fc2d278211d69c36e75a5812e88dc85ba1be95add0e17064dea287aa96b6f7 WatchSource:0}: Error finding container e4fc2d278211d69c36e75a5812e88dc85ba1be95add0e17064dea287aa96b6f7: Status 404 returned error can't find the container with id e4fc2d278211d69c36e75a5812e88dc85ba1be95add0e17064dea287aa96b6f7 Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.622181 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.653607 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.750869 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="407e9986-72b0-4507-bbad-2c1a424838fc" path="/var/lib/kubelet/pods/407e9986-72b0-4507-bbad-2c1a424838fc/volumes" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.756782 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:42 crc kubenswrapper[4662]: W1208 09:33:42.763847 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a4e74ac_ad04_499a_80c5_d7e008f1aefd.slice/crio-5039e065efb4cc336933b898fd4875466420bbd243818aab23f59164933f50de WatchSource:0}: Error finding container 5039e065efb4cc336933b898fd4875466420bbd243818aab23f59164933f50de: Status 404 returned error can't find the container with id 5039e065efb4cc336933b898fd4875466420bbd243818aab23f59164933f50de Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.801816 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-km2bx"] Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.885662 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cglm9"] Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.887022 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.891306 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.891609 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Dec 08 09:33:42 crc kubenswrapper[4662]: I1208 09:33:42.978842 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cglm9"] Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.032513 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.032683 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-scripts\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.032711 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94fmt\" (UniqueName: \"kubernetes.io/projected/2943a258-ba1d-4a9d-a6c9-e1817b52d458-kube-api-access-94fmt\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.032866 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-config-data\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.034781 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.135727 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-scripts\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.135787 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94fmt\" (UniqueName: \"kubernetes.io/projected/2943a258-ba1d-4a9d-a6c9-e1817b52d458-kube-api-access-94fmt\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.135856 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-config-data\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.135913 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.141389 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.141889 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-scripts\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.142916 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-config-data\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.161227 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94fmt\" (UniqueName: \"kubernetes.io/projected/2943a258-ba1d-4a9d-a6c9-e1817b52d458-kube-api-access-94fmt\") pod \"nova-cell1-conductor-db-sync-cglm9\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc 
kubenswrapper[4662]: I1208 09:33:43.222255 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02568702-19da-4124-bbab-bb1e3cf80e48","Type":"ContainerStarted","Data":"a4568cfda34fa9c3c1747937db7782f6efd775e2323bb81bfd5274f8979e42b4"} Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.229686 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p72s9" event={"ID":"648152ca-1c66-4843-ad6d-20450aa26819","Type":"ContainerStarted","Data":"822152dc55f3b3036e5ec659c66c2da5b01b133b23148256f51877c8e343d822"} Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.229724 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p72s9" event={"ID":"648152ca-1c66-4843-ad6d-20450aa26819","Type":"ContainerStarted","Data":"4b3e27d527e7bef9367e4e673ff08bff200c06478c9675d23f184292f74fc695"} Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.241229 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae27a002-7b9f-48d6-afa3-c5682dfee2f1","Type":"ContainerStarted","Data":"a86110c04d94704f303a9a6be158293504e32841a4b43ad886b22435da206f06"} Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.247564 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-p72s9" podStartSLOduration=3.24754873 podStartE2EDuration="3.24754873s" podCreationTimestamp="2025-12-08 09:33:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:33:43.247507479 +0000 UTC m=+1146.816535469" watchObservedRunningTime="2025-12-08 09:33:43.24754873 +0000 UTC m=+1146.816576720" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.248094 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cb04d115-0144-400c-a1de-68d8f357c395","Type":"ContainerStarted","Data":"e4fc2d278211d69c36e75a5812e88dc85ba1be95add0e17064dea287aa96b6f7"} Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.251127 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a4e74ac-ad04-499a-80c5-d7e008f1aefd","Type":"ContainerStarted","Data":"5039e065efb4cc336933b898fd4875466420bbd243818aab23f59164933f50de"} Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.255234 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" event={"ID":"44fba630-a42b-4233-a201-95137e220c54","Type":"ContainerStarted","Data":"4ead46675fa403c0ce6db97af9f098b8085ffd9da7b0b0fe415094bffc1d8761"} Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.300215 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:33:43 crc kubenswrapper[4662]: W1208 09:33:43.346902 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c6f7f6c_bee3_4a59_83a7_b45b28becfa3.slice/crio-fd24a4f5b3c4e3a79bc3fb4314d531bc6bd596ef4a4c57daa0839c1f9f2073ef WatchSource:0}: Error finding container fd24a4f5b3c4e3a79bc3fb4314d531bc6bd596ef4a4c57daa0839c1f9f2073ef: Status 404 returned error can't find the container with id fd24a4f5b3c4e3a79bc3fb4314d531bc6bd596ef4a4c57daa0839c1f9f2073ef Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.413259 4662 util.go:30] "No sandbox for pod can be found. 
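The "SyncLoop (PLEG)" records above are the Pod Lifecycle Event Generator relaying container-runtime state changes (ContainerStarted, ContainerDied) back into the kubelet sync loop, while the "Failed to process watch event ... Status 404" warnings appear to be the usual benign startup race: the cgroup watcher sees a new crio-... cgroup slice before the container is registered with the runtime cache. A compact, purely illustrative sketch of the event shape being logged by kubelet.go:2453 (field names assumed for illustration):

    // Illustrative shape of the PLEG events printed above; not the
    // kubelet's actual types.
    package main

    import "fmt"

    type PodLifecycleEvent struct {
        ID   string // pod UID
        Type string // "ContainerStarted", "ContainerDied", ...
        Data string // container or sandbox ID
    }

    func handle(pod string, ev PodLifecycleEvent) {
        fmt.Printf("SyncLoop (PLEG): event for pod %s: %+v\n", pod, ev)
    }

    func main() {
        handle("openstack/nova-api-0", PodLifecycleEvent{
            ID:   "02568702-19da-4124-bbab-bb1e3cf80e48",
            Type: "ContainerStarted",
            Data: "a4568cfda34fa9c3c1747937db7782f6efd775e2323bb81bfd5274f8979e42b4",
        })
    }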
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:43 crc kubenswrapper[4662]: I1208 09:33:43.982542 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cglm9"] Dec 08 09:33:44 crc kubenswrapper[4662]: W1208 09:33:44.006928 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2943a258_ba1d_4a9d_a6c9_e1817b52d458.slice/crio-0c2096167a5a3db8ae2021758ad20fc7039e64f38e7ea746140da344c41d433a WatchSource:0}: Error finding container 0c2096167a5a3db8ae2021758ad20fc7039e64f38e7ea746140da344c41d433a: Status 404 returned error can't find the container with id 0c2096167a5a3db8ae2021758ad20fc7039e64f38e7ea746140da344c41d433a Dec 08 09:33:44 crc kubenswrapper[4662]: I1208 09:33:44.270276 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3","Type":"ContainerStarted","Data":"fd24a4f5b3c4e3a79bc3fb4314d531bc6bd596ef4a4c57daa0839c1f9f2073ef"} Dec 08 09:33:44 crc kubenswrapper[4662]: I1208 09:33:44.271404 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cglm9" event={"ID":"2943a258-ba1d-4a9d-a6c9-e1817b52d458","Type":"ContainerStarted","Data":"0c2096167a5a3db8ae2021758ad20fc7039e64f38e7ea746140da344c41d433a"} Dec 08 09:33:44 crc kubenswrapper[4662]: I1208 09:33:44.273141 4662 generic.go:334] "Generic (PLEG): container finished" podID="44fba630-a42b-4233-a201-95137e220c54" containerID="478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130" exitCode=0 Dec 08 09:33:44 crc kubenswrapper[4662]: I1208 09:33:44.275368 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" event={"ID":"44fba630-a42b-4233-a201-95137e220c54","Type":"ContainerDied","Data":"478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130"} Dec 08 09:33:46 crc kubenswrapper[4662]: I1208 09:33:46.039706 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:46 crc kubenswrapper[4662]: I1208 09:33:46.051066 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 08 09:33:46 crc kubenswrapper[4662]: I1208 09:33:46.300734 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cglm9" event={"ID":"2943a258-ba1d-4a9d-a6c9-e1817b52d458","Type":"ContainerStarted","Data":"4468eb7e87a0280c824d0909a775014426c3b2d69d19645dfbba2d0b9c45c859"} Dec 08 09:33:46 crc kubenswrapper[4662]: I1208 09:33:46.320343 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-cglm9" podStartSLOduration=4.320325098 podStartE2EDuration="4.320325098s" podCreationTimestamp="2025-12-08 09:33:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:33:46.317684888 +0000 UTC m=+1149.886712878" watchObservedRunningTime="2025-12-08 09:33:46.320325098 +0000 UTC m=+1149.889353088" Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.314241 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cb04d115-0144-400c-a1de-68d8f357c395","Type":"ContainerStarted","Data":"141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d"} Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.314397 
4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="cb04d115-0144-400c-a1de-68d8f357c395" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d" gracePeriod=30 Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.333540 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a4e74ac-ad04-499a-80c5-d7e008f1aefd","Type":"ContainerStarted","Data":"0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e"} Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.341904 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02568702-19da-4124-bbab-bb1e3cf80e48","Type":"ContainerStarted","Data":"9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26"} Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.342499 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.297297807 podStartE2EDuration="6.342483295s" podCreationTimestamp="2025-12-08 09:33:41 +0000 UTC" firstStartedPulling="2025-12-08 09:33:42.613875759 +0000 UTC m=+1146.182903749" lastFinishedPulling="2025-12-08 09:33:46.659061247 +0000 UTC m=+1150.228089237" observedRunningTime="2025-12-08 09:33:47.337943814 +0000 UTC m=+1150.906971794" watchObservedRunningTime="2025-12-08 09:33:47.342483295 +0000 UTC m=+1150.911511285" Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.352729 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae27a002-7b9f-48d6-afa3-c5682dfee2f1","Type":"ContainerStarted","Data":"4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d"} Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.359301 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" event={"ID":"44fba630-a42b-4233-a201-95137e220c54","Type":"ContainerStarted","Data":"8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75"} Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.360033 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.364075 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3","Type":"ContainerStarted","Data":"07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91"} Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.379543 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.034414019 podStartE2EDuration="6.379521422s" podCreationTimestamp="2025-12-08 09:33:41 +0000 UTC" firstStartedPulling="2025-12-08 09:33:42.355401099 +0000 UTC m=+1145.924429089" lastFinishedPulling="2025-12-08 09:33:46.700508502 +0000 UTC m=+1150.269536492" observedRunningTime="2025-12-08 09:33:47.372342651 +0000 UTC m=+1150.941370641" watchObservedRunningTime="2025-12-08 09:33:47.379521422 +0000 UTC m=+1150.948549422" Dec 08 09:33:47 crc kubenswrapper[4662]: I1208 09:33:47.405181 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" podStartSLOduration=6.405166356 podStartE2EDuration="6.405166356s" podCreationTimestamp="2025-12-08 09:33:41 +0000 UTC" 
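The pod_startup_latency_tracker numbers above are internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that E2E figure minus the image-pull window (firstStartedPulling to lastFinishedPulling; pods whose pull fields are the zero time 0001-01-01 pulled nothing, so SLO equals E2E). For nova-scheduler-0: 09:33:47.379521422 - 09:33:41 = 6.379521422s, minus a 4.345107403s pull window, gives the logged 2.034414019s. A worked check in Go, using only timestamps copied from the records above:

    // Worked check of the nova-scheduler-0 startup-latency arithmetic.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        parse := func(s string) time.Time {
            t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-12-08 09:33:41 +0000 UTC")
        running := parse("2025-12-08 09:33:47.379521422 +0000 UTC")
        pullStart := parse("2025-12-08 09:33:42.355401099 +0000 UTC")
        pullEnd := parse("2025-12-08 09:33:46.700508502 +0000 UTC")

        e2e := running.Sub(created)         // 6.379521422s, matches podStartE2EDuration
        slo := e2e - pullEnd.Sub(pullStart) // 2.034414019s, matches podStartSLOduration
        fmt.Println(e2e, slo)
    }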
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:33:47.401045856 +0000 UTC m=+1150.970073856" watchObservedRunningTime="2025-12-08 09:33:47.405166356 +0000 UTC m=+1150.974194336" Dec 08 09:33:48 crc kubenswrapper[4662]: I1208 09:33:48.379784 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a4e74ac-ad04-499a-80c5-d7e008f1aefd","Type":"ContainerStarted","Data":"a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b"} Dec 08 09:33:48 crc kubenswrapper[4662]: I1208 09:33:48.380108 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" containerName="nova-metadata-log" containerID="cri-o://0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e" gracePeriod=30 Dec 08 09:33:48 crc kubenswrapper[4662]: I1208 09:33:48.381118 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" containerName="nova-metadata-metadata" containerID="cri-o://a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b" gracePeriod=30 Dec 08 09:33:48 crc kubenswrapper[4662]: I1208 09:33:48.382899 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02568702-19da-4124-bbab-bb1e3cf80e48","Type":"ContainerStarted","Data":"7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be"} Dec 08 09:33:48 crc kubenswrapper[4662]: I1208 09:33:48.397502 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3","Type":"ContainerStarted","Data":"e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648"} Dec 08 09:33:48 crc kubenswrapper[4662]: I1208 09:33:48.406975 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.480426074 podStartE2EDuration="7.40695888s" podCreationTimestamp="2025-12-08 09:33:41 +0000 UTC" firstStartedPulling="2025-12-08 09:33:42.766384374 +0000 UTC m=+1146.335412364" lastFinishedPulling="2025-12-08 09:33:46.69291718 +0000 UTC m=+1150.261945170" observedRunningTime="2025-12-08 09:33:48.404212537 +0000 UTC m=+1151.973240527" watchObservedRunningTime="2025-12-08 09:33:48.40695888 +0000 UTC m=+1151.975986870" Dec 08 09:33:48 crc kubenswrapper[4662]: I1208 09:33:48.450319 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.775074639 podStartE2EDuration="7.450201793s" podCreationTimestamp="2025-12-08 09:33:41 +0000 UTC" firstStartedPulling="2025-12-08 09:33:43.02320813 +0000 UTC m=+1146.592236120" lastFinishedPulling="2025-12-08 09:33:46.698335284 +0000 UTC m=+1150.267363274" observedRunningTime="2025-12-08 09:33:48.435207443 +0000 UTC m=+1152.004235433" watchObservedRunningTime="2025-12-08 09:33:48.450201793 +0000 UTC m=+1152.019229783" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.310715 4662 util.go:48] "No ready sandbox for pod can be found. 
Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.310715 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.378823 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-combined-ca-bundle\") pod \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.379931 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-logs\") pod \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.379968 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-config-data\") pod \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.380284 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-97ghh\" (UniqueName: \"kubernetes.io/projected/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-kube-api-access-97ghh\") pod \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\" (UID: \"0a4e74ac-ad04-499a-80c5-d7e008f1aefd\") " Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.380500 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-logs" (OuterVolumeSpecName: "logs") pod "0a4e74ac-ad04-499a-80c5-d7e008f1aefd" (UID: "0a4e74ac-ad04-499a-80c5-d7e008f1aefd"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.380975 4662 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-logs\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.404852 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a4e74ac-ad04-499a-80c5-d7e008f1aefd" (UID: "0a4e74ac-ad04-499a-80c5-d7e008f1aefd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.412962 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-kube-api-access-97ghh" (OuterVolumeSpecName: "kube-api-access-97ghh") pod "0a4e74ac-ad04-499a-80c5-d7e008f1aefd" (UID: "0a4e74ac-ad04-499a-80c5-d7e008f1aefd"). InnerVolumeSpecName "kube-api-access-97ghh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.422366 4662 generic.go:334] "Generic (PLEG): container finished" podID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" containerID="a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b" exitCode=0 Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.422405 4662 generic.go:334] "Generic (PLEG): container finished" podID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" containerID="0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e" exitCode=143 Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.422459 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a4e74ac-ad04-499a-80c5-d7e008f1aefd","Type":"ContainerDied","Data":"a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b"} Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.422490 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a4e74ac-ad04-499a-80c5-d7e008f1aefd","Type":"ContainerDied","Data":"0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e"} Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.422514 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"0a4e74ac-ad04-499a-80c5-d7e008f1aefd","Type":"ContainerDied","Data":"5039e065efb4cc336933b898fd4875466420bbd243818aab23f59164933f50de"} Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.422529 4662 scope.go:117] "RemoveContainer" containerID="a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.422656 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.433662 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-config-data" (OuterVolumeSpecName: "config-data") pod "0a4e74ac-ad04-499a-80c5-d7e008f1aefd" (UID: "0a4e74ac-ad04-499a-80c5-d7e008f1aefd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.445062 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3","Type":"ContainerStarted","Data":"031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf"} Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.473534 4662 scope.go:117] "RemoveContainer" containerID="0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.483018 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.483049 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.483059 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-97ghh\" (UniqueName: \"kubernetes.io/projected/0a4e74ac-ad04-499a-80c5-d7e008f1aefd-kube-api-access-97ghh\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.488545 4662 scope.go:117] "RemoveContainer" containerID="a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b" Dec 08 09:33:49 crc kubenswrapper[4662]: E1208 09:33:49.488917 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b\": container with ID starting with a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b not found: ID does not exist" containerID="a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.488962 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b"} err="failed to get container status \"a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b\": rpc error: code = NotFound desc = could not find container \"a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b\": container with ID starting with a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b not found: ID does not exist" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.488988 4662 scope.go:117] "RemoveContainer" containerID="0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e" Dec 08 09:33:49 crc kubenswrapper[4662]: E1208 09:33:49.489491 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e\": container with ID starting with 0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e not found: ID does not exist" containerID="0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.489523 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e"} err="failed to get container status 
\"0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e\": rpc error: code = NotFound desc = could not find container \"0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e\": container with ID starting with 0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e not found: ID does not exist" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.489555 4662 scope.go:117] "RemoveContainer" containerID="a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.489827 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b"} err="failed to get container status \"a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b\": rpc error: code = NotFound desc = could not find container \"a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b\": container with ID starting with a64a79906dc10b96afe331d6ff691f96aa0ddf8a6694098cafa35c2692dcfb6b not found: ID does not exist" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.489857 4662 scope.go:117] "RemoveContainer" containerID="0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.490099 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e"} err="failed to get container status \"0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e\": rpc error: code = NotFound desc = could not find container \"0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e\": container with ID starting with 0b8f348c73aecd4a54fb03c4bba4c490c9482de3f85fe9aaa49902cf9489fb7e not found: ID does not exist" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.757984 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.767568 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.814669 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:49 crc kubenswrapper[4662]: E1208 09:33:49.816758 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" containerName="nova-metadata-log" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.816788 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" containerName="nova-metadata-log" Dec 08 09:33:49 crc kubenswrapper[4662]: E1208 09:33:49.816855 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" containerName="nova-metadata-metadata" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.816863 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" containerName="nova-metadata-metadata" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.818225 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" containerName="nova-metadata-log" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.818270 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" containerName="nova-metadata-metadata" Dec 08 
Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.826906 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.830602 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.832041 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.842029 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.931693 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dwzh\" (UniqueName: \"kubernetes.io/projected/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-kube-api-access-9dwzh\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.931765 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-config-data\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.931792 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-logs\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.931822 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:49 crc kubenswrapper[4662]: I1208 09:33:49.931846 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.033625 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dwzh\" (UniqueName: \"kubernetes.io/projected/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-kube-api-access-9dwzh\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.033665 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-config-data\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.033690 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-logs\") pod 
\"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.033719 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.033757 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.034292 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-logs\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.038336 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-config-data\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.045231 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.051008 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.052055 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dwzh\" (UniqueName: \"kubernetes.io/projected/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-kube-api-access-9dwzh\") pod \"nova-metadata-0\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.241684 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.715385 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a4e74ac-ad04-499a-80c5-d7e008f1aefd" path="/var/lib/kubelet/pods/0a4e74ac-ad04-499a-80c5-d7e008f1aefd/volumes" Dec 08 09:33:50 crc kubenswrapper[4662]: I1208 09:33:50.765640 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:50 crc kubenswrapper[4662]: W1208 09:33:50.769642 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7bc5f95_4ab4_48fd_990a_2ec8c37fca8d.slice/crio-767bf5faed7f8b41de3ff5bf16d624e6e7075465dfdd25045fcb9c531cebe2aa WatchSource:0}: Error finding container 767bf5faed7f8b41de3ff5bf16d624e6e7075465dfdd25045fcb9c531cebe2aa: Status 404 returned error can't find the container with id 767bf5faed7f8b41de3ff5bf16d624e6e7075465dfdd25045fcb9c531cebe2aa Dec 08 09:33:51 crc kubenswrapper[4662]: I1208 09:33:51.409077 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 08 09:33:51 crc kubenswrapper[4662]: I1208 09:33:51.409419 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 08 09:33:51 crc kubenswrapper[4662]: I1208 09:33:51.457262 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 08 09:33:51 crc kubenswrapper[4662]: I1208 09:33:51.459973 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d","Type":"ContainerStarted","Data":"ef7194032b0be40581e38633fb3c01c94fffea56fbe6d4160013e3eb2f94c404"} Dec 08 09:33:51 crc kubenswrapper[4662]: I1208 09:33:51.460001 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d","Type":"ContainerStarted","Data":"a1bcb63f636ea1e06dbb6e38556557de8fa661927eb6c7004c020a44062f02f9"} Dec 08 09:33:51 crc kubenswrapper[4662]: I1208 09:33:51.460012 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d","Type":"ContainerStarted","Data":"767bf5faed7f8b41de3ff5bf16d624e6e7075465dfdd25045fcb9c531cebe2aa"} Dec 08 09:33:51 crc kubenswrapper[4662]: I1208 09:33:51.500274 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.500248595 podStartE2EDuration="2.500248595s" podCreationTimestamp="2025-12-08 09:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:33:51.494670076 +0000 UTC m=+1155.063698066" watchObservedRunningTime="2025-12-08 09:33:51.500248595 +0000 UTC m=+1155.069276585" Dec 08 09:33:51 crc kubenswrapper[4662]: I1208 09:33:51.504663 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 08 09:33:51 crc kubenswrapper[4662]: I1208 09:33:51.780405 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:33:52 crc kubenswrapper[4662]: I1208 09:33:52.029175 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 08 09:33:52 crc kubenswrapper[4662]: I1208 09:33:52.029229 4662 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 08 09:33:53 crc kubenswrapper[4662]: I1208 09:33:53.111125 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.174:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 08 09:33:53 crc kubenswrapper[4662]: I1208 09:33:53.111436 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.174:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 08 09:33:53 crc kubenswrapper[4662]: I1208 09:33:53.476544 4662 generic.go:334] "Generic (PLEG): container finished" podID="648152ca-1c66-4843-ad6d-20450aa26819" containerID="822152dc55f3b3036e5ec659c66c2da5b01b133b23148256f51877c8e343d822" exitCode=0 Dec 08 09:33:53 crc kubenswrapper[4662]: I1208 09:33:53.476586 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p72s9" event={"ID":"648152ca-1c66-4843-ad6d-20450aa26819","Type":"ContainerDied","Data":"822152dc55f3b3036e5ec659c66c2da5b01b133b23148256f51877c8e343d822"} Dec 08 09:33:54 crc kubenswrapper[4662]: I1208 09:33:54.487351 4662 generic.go:334] "Generic (PLEG): container finished" podID="2943a258-ba1d-4a9d-a6c9-e1817b52d458" containerID="4468eb7e87a0280c824d0909a775014426c3b2d69d19645dfbba2d0b9c45c859" exitCode=0 Dec 08 09:33:54 crc kubenswrapper[4662]: I1208 09:33:54.487488 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cglm9" event={"ID":"2943a258-ba1d-4a9d-a6c9-e1817b52d458","Type":"ContainerDied","Data":"4468eb7e87a0280c824d0909a775014426c3b2d69d19645dfbba2d0b9c45c859"} Dec 08 09:33:54 crc kubenswrapper[4662]: I1208 09:33:54.491536 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3","Type":"ContainerStarted","Data":"8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773"} Dec 08 09:33:54 crc kubenswrapper[4662]: I1208 09:33:54.539413 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.624432293 podStartE2EDuration="12.539395756s" podCreationTimestamp="2025-12-08 09:33:42 +0000 UTC" firstStartedPulling="2025-12-08 09:33:43.349651172 +0000 UTC m=+1146.918679162" lastFinishedPulling="2025-12-08 09:33:53.264614625 +0000 UTC m=+1156.833642625" observedRunningTime="2025-12-08 09:33:54.531550417 +0000 UTC m=+1158.100578427" watchObservedRunningTime="2025-12-08 09:33:54.539395756 +0000 UTC m=+1158.108423746"
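
[annotation] The two "Probe failed" records above are kubelet HTTP startup probes against the pod IP; the output string is exactly what Go's net/http client reports when no response headers arrive before its timeout. A minimal sketch of the same check, runnable from the node (the 1-second timeout is an assumption; the probe's actual timeoutSeconds is not visible in this log):

```go
// Re-run the failing nova-api-0 startup probe by hand.
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 1 * time.Second} // assumed timeout
	resp, err := client.Get("http://10.217.0.174:8774/")
	if err != nil {
		// e.g. context deadline exceeded (Client.Timeout exceeded while awaiting headers)
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe success:", resp.Status) // kubelet treats 2xx/3xx as success
}
```
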
Dec 08 09:33:54 crc kubenswrapper[4662]: I1208 09:33:54.917686 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.033099 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-combined-ca-bundle\") pod \"648152ca-1c66-4843-ad6d-20450aa26819\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.033152 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-scripts\") pod \"648152ca-1c66-4843-ad6d-20450aa26819\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.033174 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-config-data\") pod \"648152ca-1c66-4843-ad6d-20450aa26819\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.033219 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t2rg\" (UniqueName: \"kubernetes.io/projected/648152ca-1c66-4843-ad6d-20450aa26819-kube-api-access-6t2rg\") pod \"648152ca-1c66-4843-ad6d-20450aa26819\" (UID: \"648152ca-1c66-4843-ad6d-20450aa26819\") " Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.045939 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648152ca-1c66-4843-ad6d-20450aa26819-kube-api-access-6t2rg" (OuterVolumeSpecName: "kube-api-access-6t2rg") pod "648152ca-1c66-4843-ad6d-20450aa26819" (UID: "648152ca-1c66-4843-ad6d-20450aa26819"). InnerVolumeSpecName "kube-api-access-6t2rg". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.057713 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-scripts" (OuterVolumeSpecName: "scripts") pod "648152ca-1c66-4843-ad6d-20450aa26819" (UID: "648152ca-1c66-4843-ad6d-20450aa26819"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.065000 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "648152ca-1c66-4843-ad6d-20450aa26819" (UID: "648152ca-1c66-4843-ad6d-20450aa26819"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.066921 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-config-data" (OuterVolumeSpecName: "config-data") pod "648152ca-1c66-4843-ad6d-20450aa26819" (UID: "648152ca-1c66-4843-ad6d-20450aa26819"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.136063 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.136274 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.136334 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/648152ca-1c66-4843-ad6d-20450aa26819-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.136387 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t2rg\" (UniqueName: \"kubernetes.io/projected/648152ca-1c66-4843-ad6d-20450aa26819-kube-api-access-6t2rg\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.242701 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.242773 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.508023 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-p72s9" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.513301 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-p72s9" event={"ID":"648152ca-1c66-4843-ad6d-20450aa26819","Type":"ContainerDied","Data":"4b3e27d527e7bef9367e4e673ff08bff200c06478c9675d23f184292f74fc695"} Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.513374 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b3e27d527e7bef9367e4e673ff08bff200c06478c9675d23f184292f74fc695" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.513412 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.706231 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.706552 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" containerName="nova-api-api" containerID="cri-o://7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be" gracePeriod=30 Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.706711 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" containerName="nova-api-log" containerID="cri-o://9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26" gracePeriod=30 Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.717460 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.717648 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="ae27a002-7b9f-48d6-afa3-c5682dfee2f1" 
containerName="nova-scheduler-scheduler" containerID="cri-o://4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d" gracePeriod=30 Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.766341 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.766931 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" containerName="nova-metadata-metadata" containerID="cri-o://ef7194032b0be40581e38633fb3c01c94fffea56fbe6d4160013e3eb2f94c404" gracePeriod=30 Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.767128 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" containerName="nova-metadata-log" containerID="cri-o://a1bcb63f636ea1e06dbb6e38556557de8fa661927eb6c7004c020a44062f02f9" gracePeriod=30 Dec 08 09:33:55 crc kubenswrapper[4662]: I1208 09:33:55.967683 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.059947 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-scripts\") pod \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.060016 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-combined-ca-bundle\") pod \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.060146 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94fmt\" (UniqueName: \"kubernetes.io/projected/2943a258-ba1d-4a9d-a6c9-e1817b52d458-kube-api-access-94fmt\") pod \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.060172 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-config-data\") pod \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\" (UID: \"2943a258-ba1d-4a9d-a6c9-e1817b52d458\") " Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.066136 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2943a258-ba1d-4a9d-a6c9-e1817b52d458-kube-api-access-94fmt" (OuterVolumeSpecName: "kube-api-access-94fmt") pod "2943a258-ba1d-4a9d-a6c9-e1817b52d458" (UID: "2943a258-ba1d-4a9d-a6c9-e1817b52d458"). InnerVolumeSpecName "kube-api-access-94fmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.066943 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-scripts" (OuterVolumeSpecName: "scripts") pod "2943a258-ba1d-4a9d-a6c9-e1817b52d458" (UID: "2943a258-ba1d-4a9d-a6c9-e1817b52d458"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.103382 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-config-data" (OuterVolumeSpecName: "config-data") pod "2943a258-ba1d-4a9d-a6c9-e1817b52d458" (UID: "2943a258-ba1d-4a9d-a6c9-e1817b52d458"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.116361 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2943a258-ba1d-4a9d-a6c9-e1817b52d458" (UID: "2943a258-ba1d-4a9d-a6c9-e1817b52d458"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.161691 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94fmt\" (UniqueName: \"kubernetes.io/projected/2943a258-ba1d-4a9d-a6c9-e1817b52d458-kube-api-access-94fmt\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.161729 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.161765 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.161778 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2943a258-ba1d-4a9d-a6c9-e1817b52d458-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:56 crc kubenswrapper[4662]: E1208 09:33:56.328963 4662 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7bc5f95_4ab4_48fd_990a_2ec8c37fca8d.slice/crio-ef7194032b0be40581e38633fb3c01c94fffea56fbe6d4160013e3eb2f94c404.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc7bc5f95_4ab4_48fd_990a_2ec8c37fca8d.slice/crio-conmon-ef7194032b0be40581e38633fb3c01c94fffea56fbe6d4160013e3eb2f94c404.scope\": RecentStats: unable to find data in memory cache]" Dec 08 09:33:56 crc kubenswrapper[4662]: E1208 09:33:56.411645 4662 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 08 09:33:56 crc kubenswrapper[4662]: E1208 09:33:56.415637 4662 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 08 09:33:56 crc kubenswrapper[4662]: E1208 09:33:56.419818 4662 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 08 09:33:56 crc kubenswrapper[4662]: E1208 09:33:56.419897 4662 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="ae27a002-7b9f-48d6-afa3-c5682dfee2f1" containerName="nova-scheduler-scheduler" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.543713 4662 generic.go:334] "Generic (PLEG): container finished" podID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" containerID="ef7194032b0be40581e38633fb3c01c94fffea56fbe6d4160013e3eb2f94c404" exitCode=0 Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.543760 4662 generic.go:334] "Generic (PLEG): container finished" podID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" containerID="a1bcb63f636ea1e06dbb6e38556557de8fa661927eb6c7004c020a44062f02f9" exitCode=143 Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.543809 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d","Type":"ContainerDied","Data":"ef7194032b0be40581e38633fb3c01c94fffea56fbe6d4160013e3eb2f94c404"} Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.543835 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d","Type":"ContainerDied","Data":"a1bcb63f636ea1e06dbb6e38556557de8fa661927eb6c7004c020a44062f02f9"} Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.546021 4662 generic.go:334] "Generic (PLEG): container finished" podID="02568702-19da-4124-bbab-bb1e3cf80e48" containerID="9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26" exitCode=143 Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.546077 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02568702-19da-4124-bbab-bb1e3cf80e48","Type":"ContainerDied","Data":"9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26"} Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.549496 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-cglm9" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.551427 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-cglm9" event={"ID":"2943a258-ba1d-4a9d-a6c9-e1817b52d458","Type":"ContainerDied","Data":"0c2096167a5a3db8ae2021758ad20fc7039e64f38e7ea746140da344c41d433a"} Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.551470 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c2096167a5a3db8ae2021758ad20fc7039e64f38e7ea746140da344c41d433a" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.585473 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.616925 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 08 09:33:56 crc kubenswrapper[4662]: E1208 09:33:56.617388 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2943a258-ba1d-4a9d-a6c9-e1817b52d458" containerName="nova-cell1-conductor-db-sync" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.617402 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2943a258-ba1d-4a9d-a6c9-e1817b52d458" containerName="nova-cell1-conductor-db-sync" Dec 08 09:33:56 crc kubenswrapper[4662]: E1208 09:33:56.617434 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" containerName="nova-metadata-log" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.617440 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" containerName="nova-metadata-log" Dec 08 09:33:56 crc kubenswrapper[4662]: E1208 09:33:56.617461 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648152ca-1c66-4843-ad6d-20450aa26819" containerName="nova-manage" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.617470 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="648152ca-1c66-4843-ad6d-20450aa26819" containerName="nova-manage" Dec 08 09:33:56 crc kubenswrapper[4662]: E1208 09:33:56.617487 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" containerName="nova-metadata-metadata" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.617493 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" containerName="nova-metadata-metadata" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.617717 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" containerName="nova-metadata-metadata" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.617755 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="648152ca-1c66-4843-ad6d-20450aa26819" containerName="nova-manage" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.617762 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" containerName="nova-metadata-log" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.617770 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="2943a258-ba1d-4a9d-a6c9-e1817b52d458" containerName="nova-cell1-conductor-db-sync" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.618415 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.627830 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.630158 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.674021 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-logs\") pod \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.674100 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-nova-metadata-tls-certs\") pod \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.674223 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-combined-ca-bundle\") pod \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.674255 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dwzh\" (UniqueName: \"kubernetes.io/projected/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-kube-api-access-9dwzh\") pod \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.674272 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-config-data\") pod \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\" (UID: \"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d\") " Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.674512 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-logs" (OuterVolumeSpecName: "logs") pod "c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" (UID: "c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.674717 4662 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-logs\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.679550 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-kube-api-access-9dwzh" (OuterVolumeSpecName: "kube-api-access-9dwzh") pod "c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" (UID: "c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d"). InnerVolumeSpecName "kube-api-access-9dwzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.707806 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-config-data" (OuterVolumeSpecName: "config-data") pod "c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" (UID: "c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.714814 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" (UID: "c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.728713 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" (UID: "c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.776549 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcr8b\" (UniqueName: \"kubernetes.io/projected/a8d481ea-222a-4dbb-9292-e576334d6d45-kube-api-access-zcr8b\") pod \"nova-cell1-conductor-0\" (UID: \"a8d481ea-222a-4dbb-9292-e576334d6d45\") " pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.776640 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d481ea-222a-4dbb-9292-e576334d6d45-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a8d481ea-222a-4dbb-9292-e576334d6d45\") " pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.776702 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d481ea-222a-4dbb-9292-e576334d6d45-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a8d481ea-222a-4dbb-9292-e576334d6d45\") " pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.777655 4662 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.777761 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.777789 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dwzh\" (UniqueName: \"kubernetes.io/projected/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-kube-api-access-9dwzh\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.777803 4662 reconciler_common.go:293] "Volume detached for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.879718 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcr8b\" (UniqueName: \"kubernetes.io/projected/a8d481ea-222a-4dbb-9292-e576334d6d45-kube-api-access-zcr8b\") pod \"nova-cell1-conductor-0\" (UID: \"a8d481ea-222a-4dbb-9292-e576334d6d45\") " pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.879858 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d481ea-222a-4dbb-9292-e576334d6d45-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a8d481ea-222a-4dbb-9292-e576334d6d45\") " pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.879921 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d481ea-222a-4dbb-9292-e576334d6d45-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a8d481ea-222a-4dbb-9292-e576334d6d45\") " pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.884311 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a8d481ea-222a-4dbb-9292-e576334d6d45-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"a8d481ea-222a-4dbb-9292-e576334d6d45\") " pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.886220 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a8d481ea-222a-4dbb-9292-e576334d6d45-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"a8d481ea-222a-4dbb-9292-e576334d6d45\") " pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.898667 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcr8b\" (UniqueName: \"kubernetes.io/projected/a8d481ea-222a-4dbb-9292-e576334d6d45-kube-api-access-zcr8b\") pod \"nova-cell1-conductor-0\" (UID: \"a8d481ea-222a-4dbb-9292-e576334d6d45\") " pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.938721 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:56 crc kubenswrapper[4662]: I1208 09:33:56.946346 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.026927 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-4zljc"] Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.027710 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" podUID="63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" containerName="dnsmasq-dns" containerID="cri-o://c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996" gracePeriod=10 Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.482449 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.489446 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.560675 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.560683 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d","Type":"ContainerDied","Data":"767bf5faed7f8b41de3ff5bf16d624e6e7075465dfdd25045fcb9c531cebe2aa"} Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.560897 4662 scope.go:117] "RemoveContainer" containerID="ef7194032b0be40581e38633fb3c01c94fffea56fbe6d4160013e3eb2f94c404" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.565903 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a8d481ea-222a-4dbb-9292-e576334d6d45","Type":"ContainerStarted","Data":"034a075bbc69ad5a0406ebac16cce644f6cbf9396377cdcc5bc6d87639e2d3e9"} Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.570327 4662 generic.go:334] "Generic (PLEG): container finished" podID="63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" containerID="c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996" exitCode=0 Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.570356 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" event={"ID":"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5","Type":"ContainerDied","Data":"c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996"} Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.570375 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" event={"ID":"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5","Type":"ContainerDied","Data":"7e1ef37f2869a366597570bfbc1d7814598356d2984429c56817a741d3a4e3df"} Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.570427 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-58db5546cc-4zljc" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.595301 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-config\") pod \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.595430 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5bvz\" (UniqueName: \"kubernetes.io/projected/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-kube-api-access-d5bvz\") pod \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.595478 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-nb\") pod \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.595528 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-sb\") pod \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.595643 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-dns-svc\") pod \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\" (UID: \"63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5\") " Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.611525 4662 scope.go:117] "RemoveContainer" containerID="a1bcb63f636ea1e06dbb6e38556557de8fa661927eb6c7004c020a44062f02f9" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.618050 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-kube-api-access-d5bvz" (OuterVolumeSpecName: "kube-api-access-d5bvz") pod "63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" (UID: "63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5"). InnerVolumeSpecName "kube-api-access-d5bvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.656020 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.670483 4662 scope.go:117] "RemoveContainer" containerID="c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.678153 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-config" (OuterVolumeSpecName: "config") pod "63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" (UID: "63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.681040 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.689250 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:57 crc kubenswrapper[4662]: E1208 09:33:57.689656 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" containerName="dnsmasq-dns" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.689672 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" containerName="dnsmasq-dns" Dec 08 09:33:57 crc kubenswrapper[4662]: E1208 09:33:57.689700 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" containerName="init" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.689706 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" containerName="init" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.689904 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" containerName="dnsmasq-dns" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.690803 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.692756 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.693143 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.695152 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" (UID: "63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.700438 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.703393 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.703625 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.703685 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5bvz\" (UniqueName: \"kubernetes.io/projected/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-kube-api-access-d5bvz\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.717660 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" (UID: "63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5"). 
InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.721137 4662 scope.go:117] "RemoveContainer" containerID="ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.722851 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" (UID: "63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.749896 4662 scope.go:117] "RemoveContainer" containerID="c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996" Dec 08 09:33:57 crc kubenswrapper[4662]: E1208 09:33:57.750460 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996\": container with ID starting with c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996 not found: ID does not exist" containerID="c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.750569 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996"} err="failed to get container status \"c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996\": rpc error: code = NotFound desc = could not find container \"c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996\": container with ID starting with c1401f1c691f0b00aa4038ee09da4406cdbe61a0001296f0ba25723b09860996 not found: ID does not exist" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.750605 4662 scope.go:117] "RemoveContainer" containerID="ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb" Dec 08 09:33:57 crc kubenswrapper[4662]: E1208 09:33:57.752469 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb\": container with ID starting with ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb not found: ID does not exist" containerID="ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.752577 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb"} err="failed to get container status \"ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb\": rpc error: code = NotFound desc = could not find container \"ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb\": container with ID starting with ff8b683fb774786c21ba69d712b33f09ae301f9ddef0c3e70bad551bf5f409bb not found: ID does not exist" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.805923 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " 
pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.806244 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-config-data\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.806433 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-logs\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.806580 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.806790 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvl2d\" (UniqueName: \"kubernetes.io/projected/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-kube-api-access-fvl2d\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.807007 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.807121 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.911927 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.912331 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-config-data\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.912505 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-logs\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.912560 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 
crc kubenswrapper[4662]: I1208 09:33:57.912630 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvl2d\" (UniqueName: \"kubernetes.io/projected/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-kube-api-access-fvl2d\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.913361 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-logs\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.923761 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-config-data\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.932044 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-4zljc"] Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.932614 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.937269 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.941883 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-58db5546cc-4zljc"] Dec 08 09:33:57 crc kubenswrapper[4662]: I1208 09:33:57.960096 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvl2d\" (UniqueName: \"kubernetes.io/projected/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-kube-api-access-fvl2d\") pod \"nova-metadata-0\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " pod="openstack/nova-metadata-0" Dec 08 09:33:58 crc kubenswrapper[4662]: I1208 09:33:58.018833 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:33:58 crc kubenswrapper[4662]: W1208 09:33:58.455219 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb58a11d8_dde7_49c5_a1d6_78d6e5f7e4e7.slice/crio-a2c99578a102dabbcce3f4bfaada96444685aa334e41fbba0b8be537d7fec043 WatchSource:0}: Error finding container a2c99578a102dabbcce3f4bfaada96444685aa334e41fbba0b8be537d7fec043: Status 404 returned error can't find the container with id a2c99578a102dabbcce3f4bfaada96444685aa334e41fbba0b8be537d7fec043 Dec 08 09:33:58 crc kubenswrapper[4662]: I1208 09:33:58.456887 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:33:58 crc kubenswrapper[4662]: I1208 09:33:58.585436 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"a8d481ea-222a-4dbb-9292-e576334d6d45","Type":"ContainerStarted","Data":"5580fde39fb8cd66eae84519fbaf10fd605a0ee0694561f5fc30b8a0cecf2a6b"} Dec 08 09:33:58 crc kubenswrapper[4662]: I1208 09:33:58.585578 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Dec 08 09:33:58 crc kubenswrapper[4662]: I1208 09:33:58.586653 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7","Type":"ContainerStarted","Data":"a2c99578a102dabbcce3f4bfaada96444685aa334e41fbba0b8be537d7fec043"} Dec 08 09:33:58 crc kubenswrapper[4662]: I1208 09:33:58.610542 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.610524787 podStartE2EDuration="2.610524787s" podCreationTimestamp="2025-12-08 09:33:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:33:58.602607796 +0000 UTC m=+1162.171635796" watchObservedRunningTime="2025-12-08 09:33:58.610524787 +0000 UTC m=+1162.179552777" Dec 08 09:33:58 crc kubenswrapper[4662]: I1208 09:33:58.726028 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5" path="/var/lib/kubelet/pods/63e87ba2-a13a-4ac0-aa08-16ea1e3eafd5/volumes" Dec 08 09:33:58 crc kubenswrapper[4662]: I1208 09:33:58.731809 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d" path="/var/lib/kubelet/pods/c7bc5f95-4ab4-48fd-990a-2ec8c37fca8d/volumes" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.325205 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.420277 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.441685 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-468zc\" (UniqueName: \"kubernetes.io/projected/02568702-19da-4124-bbab-bb1e3cf80e48-kube-api-access-468zc\") pod \"02568702-19da-4124-bbab-bb1e3cf80e48\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.441769 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-combined-ca-bundle\") pod \"02568702-19da-4124-bbab-bb1e3cf80e48\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.441886 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-config-data\") pod \"02568702-19da-4124-bbab-bb1e3cf80e48\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.441975 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02568702-19da-4124-bbab-bb1e3cf80e48-logs\") pod \"02568702-19da-4124-bbab-bb1e3cf80e48\" (UID: \"02568702-19da-4124-bbab-bb1e3cf80e48\") " Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.442979 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/02568702-19da-4124-bbab-bb1e3cf80e48-logs" (OuterVolumeSpecName: "logs") pod "02568702-19da-4124-bbab-bb1e3cf80e48" (UID: "02568702-19da-4124-bbab-bb1e3cf80e48"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.461332 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02568702-19da-4124-bbab-bb1e3cf80e48-kube-api-access-468zc" (OuterVolumeSpecName: "kube-api-access-468zc") pod "02568702-19da-4124-bbab-bb1e3cf80e48" (UID: "02568702-19da-4124-bbab-bb1e3cf80e48"). InnerVolumeSpecName "kube-api-access-468zc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.471946 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "02568702-19da-4124-bbab-bb1e3cf80e48" (UID: "02568702-19da-4124-bbab-bb1e3cf80e48"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.473929 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-config-data" (OuterVolumeSpecName: "config-data") pod "02568702-19da-4124-bbab-bb1e3cf80e48" (UID: "02568702-19da-4124-bbab-bb1e3cf80e48"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.543635 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-combined-ca-bundle\") pod \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.543720 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5dw5\" (UniqueName: \"kubernetes.io/projected/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-kube-api-access-z5dw5\") pod \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.543771 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-config-data\") pod \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\" (UID: \"ae27a002-7b9f-48d6-afa3-c5682dfee2f1\") " Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.544228 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.544253 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/02568702-19da-4124-bbab-bb1e3cf80e48-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.544264 4662 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/02568702-19da-4124-bbab-bb1e3cf80e48-logs\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.544274 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-468zc\" (UniqueName: \"kubernetes.io/projected/02568702-19da-4124-bbab-bb1e3cf80e48-kube-api-access-468zc\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.549134 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-kube-api-access-z5dw5" (OuterVolumeSpecName: "kube-api-access-z5dw5") pod "ae27a002-7b9f-48d6-afa3-c5682dfee2f1" (UID: "ae27a002-7b9f-48d6-afa3-c5682dfee2f1"). InnerVolumeSpecName "kube-api-access-z5dw5". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.568449 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-config-data" (OuterVolumeSpecName: "config-data") pod "ae27a002-7b9f-48d6-afa3-c5682dfee2f1" (UID: "ae27a002-7b9f-48d6-afa3-c5682dfee2f1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.570541 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae27a002-7b9f-48d6-afa3-c5682dfee2f1" (UID: "ae27a002-7b9f-48d6-afa3-c5682dfee2f1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.595419 4662 generic.go:334] "Generic (PLEG): container finished" podID="02568702-19da-4124-bbab-bb1e3cf80e48" containerID="7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be" exitCode=0 Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.595491 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02568702-19da-4124-bbab-bb1e3cf80e48","Type":"ContainerDied","Data":"7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be"} Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.595521 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"02568702-19da-4124-bbab-bb1e3cf80e48","Type":"ContainerDied","Data":"a4568cfda34fa9c3c1747937db7782f6efd775e2323bb81bfd5274f8979e42b4"} Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.595595 4662 scope.go:117] "RemoveContainer" containerID="7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.596807 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.598212 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7","Type":"ContainerStarted","Data":"ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73"} Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.598252 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7","Type":"ContainerStarted","Data":"2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d"} Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.599431 4662 generic.go:334] "Generic (PLEG): container finished" podID="ae27a002-7b9f-48d6-afa3-c5682dfee2f1" containerID="4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d" exitCode=0 Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.599570 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae27a002-7b9f-48d6-afa3-c5682dfee2f1","Type":"ContainerDied","Data":"4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d"} Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.599614 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"ae27a002-7b9f-48d6-afa3-c5682dfee2f1","Type":"ContainerDied","Data":"a86110c04d94704f303a9a6be158293504e32841a4b43ad886b22435da206f06"} Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.599719 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.615787 4662 scope.go:117] "RemoveContainer" containerID="9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.622189 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.6221709520000003 podStartE2EDuration="2.622170952s" podCreationTimestamp="2025-12-08 09:33:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:33:59.618148565 +0000 UTC m=+1163.187176565" watchObservedRunningTime="2025-12-08 09:33:59.622170952 +0000 UTC m=+1163.191198942" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.638927 4662 scope.go:117] "RemoveContainer" containerID="7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be" Dec 08 09:33:59 crc kubenswrapper[4662]: E1208 09:33:59.640210 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be\": container with ID starting with 7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be not found: ID does not exist" containerID="7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.640238 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be"} err="failed to get container status \"7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be\": rpc error: code = NotFound desc = could not find container \"7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be\": container with ID starting with 7e148ca404325df7165c19074ef37e1e901d127cefb558177bfe184b6f7916be not found: ID does not exist" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.640257 4662 scope.go:117] "RemoveContainer" containerID="9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26" Dec 08 09:33:59 crc kubenswrapper[4662]: E1208 09:33:59.640497 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26\": container with ID starting with 9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26 not found: ID does not exist" containerID="9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.640536 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26"} err="failed to get container status \"9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26\": rpc error: code = NotFound desc = could not find container \"9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26\": container with ID starting with 9b14881a0c18fa4ee39e79fbfc78d533dbb6be8f32946534335203d64b99aa26 not found: ID does not exist" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.640553 4662 scope.go:117] "RemoveContainer" containerID="4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.646045 4662 reconciler_common.go:293] 
"Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.646071 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5dw5\" (UniqueName: \"kubernetes.io/projected/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-kube-api-access-z5dw5\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.646082 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae27a002-7b9f-48d6-afa3-c5682dfee2f1-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.648552 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.660166 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.666377 4662 scope.go:117] "RemoveContainer" containerID="4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d" Dec 08 09:33:59 crc kubenswrapper[4662]: E1208 09:33:59.668441 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d\": container with ID starting with 4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d not found: ID does not exist" containerID="4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.668571 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d"} err="failed to get container status \"4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d\": rpc error: code = NotFound desc = could not find container \"4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d\": container with ID starting with 4bf0676ef83019e21ee9b8e885f05a4c74937d8dcedb7918ca69f3640dc4f63d not found: ID does not exist" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.670288 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:33:59 crc kubenswrapper[4662]: E1208 09:33:59.670687 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" containerName="nova-api-api" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.670761 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" containerName="nova-api-api" Dec 08 09:33:59 crc kubenswrapper[4662]: E1208 09:33:59.670849 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" containerName="nova-api-log" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.670899 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" containerName="nova-api-log" Dec 08 09:33:59 crc kubenswrapper[4662]: E1208 09:33:59.670950 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae27a002-7b9f-48d6-afa3-c5682dfee2f1" containerName="nova-scheduler-scheduler" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.671004 4662 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ae27a002-7b9f-48d6-afa3-c5682dfee2f1" containerName="nova-scheduler-scheduler" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.671217 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae27a002-7b9f-48d6-afa3-c5682dfee2f1" containerName="nova-scheduler-scheduler" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.671350 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" containerName="nova-api-api" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.671411 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" containerName="nova-api-log" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.672050 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.676240 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.715886 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.736937 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.747662 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djz2z\" (UniqueName: \"kubernetes.io/projected/c4101dc0-834a-401a-93da-ac0c8db6e9d5-kube-api-access-djz2z\") pod \"nova-scheduler-0\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.747836 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-config-data\") pod \"nova-scheduler-0\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.747903 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.747680 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.766453 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.768417 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.770183 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.787224 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.849799 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djz2z\" (UniqueName: \"kubernetes.io/projected/c4101dc0-834a-401a-93da-ac0c8db6e9d5-kube-api-access-djz2z\") pod \"nova-scheduler-0\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.849937 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e6f51df-70aa-40f0-a95c-fe71805cae56-logs\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.850004 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.850048 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpxgl\" (UniqueName: \"kubernetes.io/projected/2e6f51df-70aa-40f0-a95c-fe71805cae56-kube-api-access-rpxgl\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.850150 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-config-data\") pod \"nova-scheduler-0\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.850222 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-config-data\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.850248 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.854131 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.855197 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-config-data\") pod 
\"nova-scheduler-0\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.869180 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djz2z\" (UniqueName: \"kubernetes.io/projected/c4101dc0-834a-401a-93da-ac0c8db6e9d5-kube-api-access-djz2z\") pod \"nova-scheduler-0\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " pod="openstack/nova-scheduler-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.951988 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.952038 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpxgl\" (UniqueName: \"kubernetes.io/projected/2e6f51df-70aa-40f0-a95c-fe71805cae56-kube-api-access-rpxgl\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.952123 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-config-data\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.952193 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e6f51df-70aa-40f0-a95c-fe71805cae56-logs\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.952542 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e6f51df-70aa-40f0-a95c-fe71805cae56-logs\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.956152 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-config-data\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.956223 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:33:59 crc kubenswrapper[4662]: I1208 09:33:59.967462 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpxgl\" (UniqueName: \"kubernetes.io/projected/2e6f51df-70aa-40f0-a95c-fe71805cae56-kube-api-access-rpxgl\") pod \"nova-api-0\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " pod="openstack/nova-api-0" Dec 08 09:34:00 crc kubenswrapper[4662]: I1208 09:34:00.007328 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 08 09:34:00 crc kubenswrapper[4662]: I1208 09:34:00.099871 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:34:00 crc kubenswrapper[4662]: W1208 09:34:00.519400 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc4101dc0_834a_401a_93da_ac0c8db6e9d5.slice/crio-1089c1b9bf8959229c530e5caf9178a7e94cf5783d42bbc2ed40869e7d8aa97b WatchSource:0}: Error finding container 1089c1b9bf8959229c530e5caf9178a7e94cf5783d42bbc2ed40869e7d8aa97b: Status 404 returned error can't find the container with id 1089c1b9bf8959229c530e5caf9178a7e94cf5783d42bbc2ed40869e7d8aa97b Dec 08 09:34:00 crc kubenswrapper[4662]: I1208 09:34:00.519536 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:34:00 crc kubenswrapper[4662]: I1208 09:34:00.689091 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c4101dc0-834a-401a-93da-ac0c8db6e9d5","Type":"ContainerStarted","Data":"1089c1b9bf8959229c530e5caf9178a7e94cf5783d42bbc2ed40869e7d8aa97b"} Dec 08 09:34:00 crc kubenswrapper[4662]: I1208 09:34:00.716192 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02568702-19da-4124-bbab-bb1e3cf80e48" path="/var/lib/kubelet/pods/02568702-19da-4124-bbab-bb1e3cf80e48/volumes" Dec 08 09:34:00 crc kubenswrapper[4662]: I1208 09:34:00.716906 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae27a002-7b9f-48d6-afa3-c5682dfee2f1" path="/var/lib/kubelet/pods/ae27a002-7b9f-48d6-afa3-c5682dfee2f1/volumes" Dec 08 09:34:01 crc kubenswrapper[4662]: I1208 09:34:01.365220 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:01 crc kubenswrapper[4662]: I1208 09:34:01.707088 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e6f51df-70aa-40f0-a95c-fe71805cae56","Type":"ContainerStarted","Data":"d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be"} Dec 08 09:34:01 crc kubenswrapper[4662]: I1208 09:34:01.707465 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e6f51df-70aa-40f0-a95c-fe71805cae56","Type":"ContainerStarted","Data":"799f9791938c79bb899f2bafc5cb6cddba14e0a0da0dfd83e40500a55c84100b"} Dec 08 09:34:01 crc kubenswrapper[4662]: I1208 09:34:01.710381 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c4101dc0-834a-401a-93da-ac0c8db6e9d5","Type":"ContainerStarted","Data":"d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff"} Dec 08 09:34:02 crc kubenswrapper[4662]: I1208 09:34:02.741911 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e6f51df-70aa-40f0-a95c-fe71805cae56","Type":"ContainerStarted","Data":"b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e"} Dec 08 09:34:02 crc kubenswrapper[4662]: I1208 09:34:02.771117 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.771102251 podStartE2EDuration="3.771102251s" podCreationTimestamp="2025-12-08 09:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:34:02.76805293 +0000 UTC m=+1166.337080920" watchObservedRunningTime="2025-12-08 09:34:02.771102251 +0000 UTC m=+1166.340130231" Dec 08 09:34:02 crc kubenswrapper[4662]: I1208 09:34:02.771487 4662 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.771483062 podStartE2EDuration="3.771483062s" podCreationTimestamp="2025-12-08 09:33:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:34:01.727786621 +0000 UTC m=+1165.296814611" watchObservedRunningTime="2025-12-08 09:34:02.771483062 +0000 UTC m=+1166.340511052" Dec 08 09:34:03 crc kubenswrapper[4662]: I1208 09:34:03.019250 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 08 09:34:03 crc kubenswrapper[4662]: I1208 09:34:03.020165 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 08 09:34:05 crc kubenswrapper[4662]: I1208 09:34:05.006425 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 08 09:34:06 crc kubenswrapper[4662]: I1208 09:34:06.989380 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Dec 08 09:34:08 crc kubenswrapper[4662]: I1208 09:34:08.019364 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 08 09:34:08 crc kubenswrapper[4662]: I1208 09:34:08.019681 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 08 09:34:09 crc kubenswrapper[4662]: I1208 09:34:09.030984 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 08 09:34:09 crc kubenswrapper[4662]: I1208 09:34:09.031018 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.179:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 08 09:34:10 crc kubenswrapper[4662]: I1208 09:34:10.006156 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 08 09:34:10 crc kubenswrapper[4662]: I1208 09:34:10.032103 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Dec 08 09:34:10 crc kubenswrapper[4662]: I1208 09:34:10.105880 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 08 09:34:10 crc kubenswrapper[4662]: I1208 09:34:10.107100 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 08 09:34:10 crc kubenswrapper[4662]: I1208 09:34:10.837498 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 08 09:34:11 crc kubenswrapper[4662]: I1208 09:34:11.188927 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="2e6f51df-70aa-40f0-a95c-fe71805cae56" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.181:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 08 09:34:11 crc kubenswrapper[4662]: I1208 09:34:11.191711 4662 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/nova-api-0" podUID="2e6f51df-70aa-40f0-a95c-fe71805cae56" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.181:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 08 09:34:12 crc kubenswrapper[4662]: I1208 09:34:12.660113 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.179037 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.180722 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="05001747-2338-4f97-8f09-68b9541f94e8" containerName="kube-state-metrics" containerID="cri-o://646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b" gracePeriod=30 Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.621515 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.664866 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sxwm\" (UniqueName: \"kubernetes.io/projected/05001747-2338-4f97-8f09-68b9541f94e8-kube-api-access-6sxwm\") pod \"05001747-2338-4f97-8f09-68b9541f94e8\" (UID: \"05001747-2338-4f97-8f09-68b9541f94e8\") " Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.684464 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05001747-2338-4f97-8f09-68b9541f94e8-kube-api-access-6sxwm" (OuterVolumeSpecName: "kube-api-access-6sxwm") pod "05001747-2338-4f97-8f09-68b9541f94e8" (UID: "05001747-2338-4f97-8f09-68b9541f94e8"). InnerVolumeSpecName "kube-api-access-6sxwm". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.768752 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sxwm\" (UniqueName: \"kubernetes.io/projected/05001747-2338-4f97-8f09-68b9541f94e8-kube-api-access-6sxwm\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.852330 4662 generic.go:334] "Generic (PLEG): container finished" podID="05001747-2338-4f97-8f09-68b9541f94e8" containerID="646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b" exitCode=2 Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.852377 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.852377 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05001747-2338-4f97-8f09-68b9541f94e8","Type":"ContainerDied","Data":"646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b"} Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.852522 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"05001747-2338-4f97-8f09-68b9541f94e8","Type":"ContainerDied","Data":"11d5cefe74c6a0927dc5ff96853924aa6365f8e4edfe96cd7f9ce2917d3850ba"} Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.852553 4662 scope.go:117] "RemoveContainer" containerID="646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.916774 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.934009 4662 scope.go:117] "RemoveContainer" containerID="646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b" Dec 08 09:34:15 crc kubenswrapper[4662]: E1208 09:34:15.935061 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b\": container with ID starting with 646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b not found: ID does not exist" containerID="646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.935137 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b"} err="failed to get container status \"646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b\": rpc error: code = NotFound desc = could not find container \"646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b\": container with ID starting with 646fd75bb951d06753c6f28ef1acaee65b35462078ef37b1a8a620c11861436b not found: ID does not exist" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.945414 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.959809 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Dec 08 09:34:15 crc kubenswrapper[4662]: E1208 09:34:15.960243 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05001747-2338-4f97-8f09-68b9541f94e8" containerName="kube-state-metrics" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.960269 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="05001747-2338-4f97-8f09-68b9541f94e8" containerName="kube-state-metrics" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.960498 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="05001747-2338-4f97-8f09-68b9541f94e8" containerName="kube-state-metrics" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.961179 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.964041 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.964114 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Dec 08 09:34:15 crc kubenswrapper[4662]: I1208 09:34:15.971456 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.002880 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btzzm\" (UniqueName: \"kubernetes.io/projected/6a6f2271-26f9-4108-b71c-539f915674f9-kube-api-access-btzzm\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.002933 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a6f2271-26f9-4108-b71c-539f915674f9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.003037 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6a6f2271-26f9-4108-b71c-539f915674f9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.003115 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6f2271-26f9-4108-b71c-539f915674f9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.104553 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6a6f2271-26f9-4108-b71c-539f915674f9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.104846 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6f2271-26f9-4108-b71c-539f915674f9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.104961 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btzzm\" (UniqueName: \"kubernetes.io/projected/6a6f2271-26f9-4108-b71c-539f915674f9-kube-api-access-btzzm\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.105001 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6a6f2271-26f9-4108-b71c-539f915674f9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.110834 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6a6f2271-26f9-4108-b71c-539f915674f9-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.111145 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a6f2271-26f9-4108-b71c-539f915674f9-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.111336 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a6f2271-26f9-4108-b71c-539f915674f9-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.128531 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btzzm\" (UniqueName: \"kubernetes.io/projected/6a6f2271-26f9-4108-b71c-539f915674f9-kube-api-access-btzzm\") pod \"kube-state-metrics-0\" (UID: \"6a6f2271-26f9-4108-b71c-539f915674f9\") " pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.296819 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.307349 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.307677 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="ceilometer-central-agent" containerID="cri-o://07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91" gracePeriod=30 Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.308200 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="proxy-httpd" containerID="cri-o://8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773" gracePeriod=30 Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.308261 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="sg-core" containerID="cri-o://031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf" gracePeriod=30 Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.308328 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="ceilometer-notification-agent" containerID="cri-o://e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648" gracePeriod=30 Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.712384 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05001747-2338-4f97-8f09-68b9541f94e8" path="/var/lib/kubelet/pods/05001747-2338-4f97-8f09-68b9541f94e8/volumes" Dec 08 09:34:16 crc kubenswrapper[4662]: E1208 09:34:16.818141 4662 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c6f7f6c_bee3_4a59_83a7_b45b28becfa3.slice/crio-07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c6f7f6c_bee3_4a59_83a7_b45b28becfa3.slice/crio-conmon-07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91.scope\": RecentStats: unable to find data in memory cache]" Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.844960 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Dec 08 09:34:16 crc kubenswrapper[4662]: W1208 09:34:16.849075 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a6f2271_26f9_4108_b71c_539f915674f9.slice/crio-d5ba0e81f547f2b3ff9e3a829d40629f239530ce8e1fe4ae2c6ff57914ca6c45 WatchSource:0}: Error finding container d5ba0e81f547f2b3ff9e3a829d40629f239530ce8e1fe4ae2c6ff57914ca6c45: Status 404 returned error can't find the container with id d5ba0e81f547f2b3ff9e3a829d40629f239530ce8e1fe4ae2c6ff57914ca6c45 Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.851801 4662 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.865515 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"6a6f2271-26f9-4108-b71c-539f915674f9","Type":"ContainerStarted","Data":"d5ba0e81f547f2b3ff9e3a829d40629f239530ce8e1fe4ae2c6ff57914ca6c45"} Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.870007 4662 generic.go:334] "Generic (PLEG): container finished" podID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerID="8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773" exitCode=0 Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.870036 4662 generic.go:334] "Generic (PLEG): container finished" podID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerID="031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf" exitCode=2 Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.870045 4662 generic.go:334] "Generic (PLEG): container finished" podID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerID="07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91" exitCode=0 Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.870051 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3","Type":"ContainerDied","Data":"8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773"} Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.870117 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3","Type":"ContainerDied","Data":"031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf"} Dec 08 09:34:16 crc kubenswrapper[4662]: I1208 09:34:16.870135 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3","Type":"ContainerDied","Data":"07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91"} Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.826263 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.878019 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6a6f2271-26f9-4108-b71c-539f915674f9","Type":"ContainerStarted","Data":"92ab3062355c71332d212790d22f2d2f470a1d6d85c64c1a92a0fb3c74d8b878"} Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.878773 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.880681 4662 generic.go:334] "Generic (PLEG): container finished" podID="cb04d115-0144-400c-a1de-68d8f357c395" containerID="141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d" exitCode=137 Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.880720 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.880729 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cb04d115-0144-400c-a1de-68d8f357c395","Type":"ContainerDied","Data":"141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d"} Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.880770 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"cb04d115-0144-400c-a1de-68d8f357c395","Type":"ContainerDied","Data":"e4fc2d278211d69c36e75a5812e88dc85ba1be95add0e17064dea287aa96b6f7"} Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.880789 4662 scope.go:117] "RemoveContainer" containerID="141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d" Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.902059 4662 scope.go:117] "RemoveContainer" containerID="141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d" Dec 08 09:34:17 crc kubenswrapper[4662]: E1208 09:34:17.902591 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d\": container with ID starting with 141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d not found: ID does not exist" containerID="141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d" Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.902635 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d"} err="failed to get container status \"141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d\": rpc error: code = NotFound desc = could not find container \"141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d\": container with ID starting with 141336f14537d7c057a92a0aef39c8e30eae137f7966d68ae95ae3c4f62c735d not found: ID does not exist" Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.920933 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.488942251 podStartE2EDuration="2.920915309s" podCreationTimestamp="2025-12-08 09:34:15 +0000 UTC" firstStartedPulling="2025-12-08 09:34:16.851595678 +0000 UTC m=+1180.420623668" lastFinishedPulling="2025-12-08 09:34:17.283568696 +0000 UTC m=+1180.852596726" observedRunningTime="2025-12-08 09:34:17.90112061 +0000 UTC m=+1181.470148600" watchObservedRunningTime="2025-12-08 09:34:17.920915309 +0000 UTC m=+1181.489943299" Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.949753 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-config-data\") pod \"cb04d115-0144-400c-a1de-68d8f357c395\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.950035 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-combined-ca-bundle\") pod \"cb04d115-0144-400c-a1de-68d8f357c395\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.950094 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-hf787\" (UniqueName: \"kubernetes.io/projected/cb04d115-0144-400c-a1de-68d8f357c395-kube-api-access-hf787\") pod \"cb04d115-0144-400c-a1de-68d8f357c395\" (UID: \"cb04d115-0144-400c-a1de-68d8f357c395\") " Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.955091 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb04d115-0144-400c-a1de-68d8f357c395-kube-api-access-hf787" (OuterVolumeSpecName: "kube-api-access-hf787") pod "cb04d115-0144-400c-a1de-68d8f357c395" (UID: "cb04d115-0144-400c-a1de-68d8f357c395"). InnerVolumeSpecName "kube-api-access-hf787". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.975047 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-config-data" (OuterVolumeSpecName: "config-data") pod "cb04d115-0144-400c-a1de-68d8f357c395" (UID: "cb04d115-0144-400c-a1de-68d8f357c395"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:17 crc kubenswrapper[4662]: I1208 09:34:17.983214 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb04d115-0144-400c-a1de-68d8f357c395" (UID: "cb04d115-0144-400c-a1de-68d8f357c395"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.025812 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.026653 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.031097 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.052514 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.052621 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf787\" (UniqueName: \"kubernetes.io/projected/cb04d115-0144-400c-a1de-68d8f357c395-kube-api-access-hf787\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.053053 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb04d115-0144-400c-a1de-68d8f357c395-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.273658 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.285490 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.299561 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 08 09:34:18 crc kubenswrapper[4662]: E1208 09:34:18.300095 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb04d115-0144-400c-a1de-68d8f357c395" 
containerName="nova-cell1-novncproxy-novncproxy" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.300120 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb04d115-0144-400c-a1de-68d8f357c395" containerName="nova-cell1-novncproxy-novncproxy" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.300370 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb04d115-0144-400c-a1de-68d8f357c395" containerName="nova-cell1-novncproxy-novncproxy" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.301184 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.310949 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.311274 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.311515 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.312006 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.463308 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.463420 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.463468 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2njs\" (UniqueName: \"kubernetes.io/projected/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-kube-api-access-c2njs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.463505 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.463839 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.565416 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.565514 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2njs\" (UniqueName: \"kubernetes.io/projected/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-kube-api-access-c2njs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.565562 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.565608 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.565639 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.570843 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.571280 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.571456 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.573105 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.583458 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2njs\" (UniqueName: \"kubernetes.io/projected/8251fb8d-8798-4373-a7ae-2336eb6dc2d3-kube-api-access-c2njs\") pod \"nova-cell1-novncproxy-0\" 
(UID: \"8251fb8d-8798-4373-a7ae-2336eb6dc2d3\") " pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.652089 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.718759 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb04d115-0144-400c-a1de-68d8f357c395" path="/var/lib/kubelet/pods/cb04d115-0144-400c-a1de-68d8f357c395/volumes" Dec 08 09:34:18 crc kubenswrapper[4662]: I1208 09:34:18.903018 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 08 09:34:19 crc kubenswrapper[4662]: I1208 09:34:19.144410 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Dec 08 09:34:19 crc kubenswrapper[4662]: I1208 09:34:19.914080 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8251fb8d-8798-4373-a7ae-2336eb6dc2d3","Type":"ContainerStarted","Data":"b36d4332024f1eaf9c7905c8f3092a88faae264a19a4961b3004bb158ac57dd8"} Dec 08 09:34:19 crc kubenswrapper[4662]: I1208 09:34:19.914515 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"8251fb8d-8798-4373-a7ae-2336eb6dc2d3","Type":"ContainerStarted","Data":"472fa433d8f191b585238e6432db33ea8c6f0878ecd3cee7ecb96efa8baccce9"} Dec 08 09:34:19 crc kubenswrapper[4662]: I1208 09:34:19.936067 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=1.936044335 podStartE2EDuration="1.936044335s" podCreationTimestamp="2025-12-08 09:34:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:34:19.931494314 +0000 UTC m=+1183.500522324" watchObservedRunningTime="2025-12-08 09:34:19.936044335 +0000 UTC m=+1183.505072345" Dec 08 09:34:20 crc kubenswrapper[4662]: I1208 09:34:20.110255 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 08 09:34:20 crc kubenswrapper[4662]: I1208 09:34:20.110865 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 08 09:34:20 crc kubenswrapper[4662]: I1208 09:34:20.111704 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 08 09:34:20 crc kubenswrapper[4662]: I1208 09:34:20.116641 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 08 09:34:20 crc kubenswrapper[4662]: I1208 09:34:20.924762 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 08 09:34:20 crc kubenswrapper[4662]: I1208 09:34:20.929349 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.126190 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-hzhtj"] Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.127590 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.150892 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-hzhtj"] Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.229073 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-config\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.229238 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pzd7\" (UniqueName: \"kubernetes.io/projected/d732ac3b-c886-4109-b09e-780d6fd5b6f7-kube-api-access-9pzd7\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.229261 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.229300 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.229338 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.330760 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.330802 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.330905 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-config\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.330939 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-9pzd7\" (UniqueName: \"kubernetes.io/projected/d732ac3b-c886-4109-b09e-780d6fd5b6f7-kube-api-access-9pzd7\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.330953 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.331705 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-sb\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.332355 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-nb\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.332934 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-dns-svc\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.333483 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-config\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.384766 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pzd7\" (UniqueName: \"kubernetes.io/projected/d732ac3b-c886-4109-b09e-780d6fd5b6f7-kube-api-access-9pzd7\") pod \"dnsmasq-dns-68d4b6d797-hzhtj\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.459185 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.879590 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.958914 4662 generic.go:334] "Generic (PLEG): container finished" podID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerID="e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648" exitCode=0 Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.959592 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.959784 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3","Type":"ContainerDied","Data":"e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648"} Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.959812 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3","Type":"ContainerDied","Data":"fd24a4f5b3c4e3a79bc3fb4314d531bc6bd596ef4a4c57daa0839c1f9f2073ef"} Dec 08 09:34:21 crc kubenswrapper[4662]: I1208 09:34:21.959829 4662 scope.go:117] "RemoveContainer" containerID="8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.015482 4662 scope.go:117] "RemoveContainer" containerID="031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.050203 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-combined-ca-bundle\") pod \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.050296 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-scripts\") pod \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.050373 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-log-httpd\") pod \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.050393 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-config-data\") pod \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.050423 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-sg-core-conf-yaml\") pod \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.050454 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-run-httpd\") pod \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.050471 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp4tt\" (UniqueName: \"kubernetes.io/projected/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-kube-api-access-cp4tt\") pod \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\" (UID: \"5c6f7f6c-bee3-4a59-83a7-b45b28becfa3\") " Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.053322 4662 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" (UID: "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.054830 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" (UID: "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.062564 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-hzhtj"] Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.082159 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-scripts" (OuterVolumeSpecName: "scripts") pod "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" (UID: "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.085534 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-kube-api-access-cp4tt" (OuterVolumeSpecName: "kube-api-access-cp4tt") pod "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" (UID: "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3"). InnerVolumeSpecName "kube-api-access-cp4tt". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.090523 4662 scope.go:117] "RemoveContainer" containerID="e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.094415 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" (UID: "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:22 crc kubenswrapper[4662]: W1208 09:34:22.103846 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd732ac3b_c886_4109_b09e_780d6fd5b6f7.slice/crio-86fc8170084981b3269deaf82e0de6f4d28e4d17b1c3d460ac39883a62f737c3 WatchSource:0}: Error finding container 86fc8170084981b3269deaf82e0de6f4d28e4d17b1c3d460ac39883a62f737c3: Status 404 returned error can't find the container with id 86fc8170084981b3269deaf82e0de6f4d28e4d17b1c3d460ac39883a62f737c3 Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.146827 4662 scope.go:117] "RemoveContainer" containerID="07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.154527 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.154550 4662 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.154560 4662 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.154568 4662 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.154577 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp4tt\" (UniqueName: \"kubernetes.io/projected/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-kube-api-access-cp4tt\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.174984 4662 scope.go:117] "RemoveContainer" containerID="8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773" Dec 08 09:34:22 crc kubenswrapper[4662]: E1208 09:34:22.175452 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773\": container with ID starting with 8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773 not found: ID does not exist" containerID="8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.175483 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773"} err="failed to get container status \"8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773\": rpc error: code = NotFound desc = could not find container \"8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773\": container with ID starting with 8f69340893ac5ddc9e20e3d2e9de5486ce9ad9ac2645d2c9ac90d4d56e6f4773 not found: ID does not exist" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.175510 4662 scope.go:117] "RemoveContainer" containerID="031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf" Dec 08 09:34:22 crc kubenswrapper[4662]: 
E1208 09:34:22.175734 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf\": container with ID starting with 031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf not found: ID does not exist" containerID="031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.175770 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf"} err="failed to get container status \"031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf\": rpc error: code = NotFound desc = could not find container \"031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf\": container with ID starting with 031ffb4847981bac0be9b1257a76f66dc69f4c480af9b54456ee4def305551bf not found: ID does not exist" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.175785 4662 scope.go:117] "RemoveContainer" containerID="e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648" Dec 08 09:34:22 crc kubenswrapper[4662]: E1208 09:34:22.176120 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648\": container with ID starting with e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648 not found: ID does not exist" containerID="e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.176155 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648"} err="failed to get container status \"e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648\": rpc error: code = NotFound desc = could not find container \"e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648\": container with ID starting with e1425045c10dddede8a4ca2a88b1531006e9af67bf5185b02945001ece22f648 not found: ID does not exist" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.176172 4662 scope.go:117] "RemoveContainer" containerID="07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91" Dec 08 09:34:22 crc kubenswrapper[4662]: E1208 09:34:22.176411 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91\": container with ID starting with 07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91 not found: ID does not exist" containerID="07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.176429 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91"} err="failed to get container status \"07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91\": rpc error: code = NotFound desc = could not find container \"07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91\": container with ID starting with 07d6fccc8af78bfec0c5118492ce4ea8d21465b1be46ea16049ed36dd691da91 not found: ID does not exist" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.195642 
4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" (UID: "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.206964 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-config-data" (OuterVolumeSpecName: "config-data") pod "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" (UID: "5c6f7f6c-bee3-4a59-83a7-b45b28becfa3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.255821 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.255859 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.297096 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.315397 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.325509 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:22 crc kubenswrapper[4662]: E1208 09:34:22.326005 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="sg-core" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.326028 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="sg-core" Dec 08 09:34:22 crc kubenswrapper[4662]: E1208 09:34:22.326048 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="ceilometer-central-agent" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.326056 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="ceilometer-central-agent" Dec 08 09:34:22 crc kubenswrapper[4662]: E1208 09:34:22.326069 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="ceilometer-notification-agent" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.326077 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="ceilometer-notification-agent" Dec 08 09:34:22 crc kubenswrapper[4662]: E1208 09:34:22.326098 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="proxy-httpd" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.326106 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="proxy-httpd" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.326342 4662 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="ceilometer-central-agent" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.326360 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="proxy-httpd" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.326377 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="sg-core" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.326393 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" containerName="ceilometer-notification-agent" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.328544 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.335058 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.335195 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.336950 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.354067 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.463755 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.463860 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-config-data\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.463957 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.464139 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-log-httpd\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.464208 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.464273 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-scripts\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.464321 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-run-httpd\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.464358 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6sbx\" (UniqueName: \"kubernetes.io/projected/958321e4-b44d-4fb2-9b2d-099e20273e33-kube-api-access-g6sbx\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.565938 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.565993 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-log-httpd\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.566020 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.566046 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-scripts\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.566071 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-run-httpd\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.566089 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6sbx\" (UniqueName: \"kubernetes.io/projected/958321e4-b44d-4fb2-9b2d-099e20273e33-kube-api-access-g6sbx\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.566119 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.566156 4662 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-config-data\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.566862 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-log-httpd\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.567007 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-run-httpd\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.571464 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-config-data\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.571565 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.571578 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.572335 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.572548 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-scripts\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.588312 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6sbx\" (UniqueName: \"kubernetes.io/projected/958321e4-b44d-4fb2-9b2d-099e20273e33-kube-api-access-g6sbx\") pod \"ceilometer-0\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.649614 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.717688 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c6f7f6c-bee3-4a59-83a7-b45b28becfa3" path="/var/lib/kubelet/pods/5c6f7f6c-bee3-4a59-83a7-b45b28becfa3/volumes" Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.991987 4662 generic.go:334] "Generic (PLEG): container finished" podID="d732ac3b-c886-4109-b09e-780d6fd5b6f7" containerID="e21225caa0c4b50feb2638681a3859e3d80cc066758d00d42d1705b0db97a704" exitCode=0 Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.992035 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" event={"ID":"d732ac3b-c886-4109-b09e-780d6fd5b6f7","Type":"ContainerDied","Data":"e21225caa0c4b50feb2638681a3859e3d80cc066758d00d42d1705b0db97a704"} Dec 08 09:34:22 crc kubenswrapper[4662]: I1208 09:34:22.992096 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" event={"ID":"d732ac3b-c886-4109-b09e-780d6fd5b6f7","Type":"ContainerStarted","Data":"86fc8170084981b3269deaf82e0de6f4d28e4d17b1c3d460ac39883a62f737c3"} Dec 08 09:34:23 crc kubenswrapper[4662]: I1208 09:34:23.177931 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:23 crc kubenswrapper[4662]: W1208 09:34:23.184344 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod958321e4_b44d_4fb2_9b2d_099e20273e33.slice/crio-076337e7b22ed7f19dadbfff8d1ab91e2ed84e4f42d707f2ec4f426efa338e4e WatchSource:0}: Error finding container 076337e7b22ed7f19dadbfff8d1ab91e2ed84e4f42d707f2ec4f426efa338e4e: Status 404 returned error can't find the container with id 076337e7b22ed7f19dadbfff8d1ab91e2ed84e4f42d707f2ec4f426efa338e4e Dec 08 09:34:23 crc kubenswrapper[4662]: I1208 09:34:23.611010 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:23 crc kubenswrapper[4662]: I1208 09:34:23.653150 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:24 crc kubenswrapper[4662]: I1208 09:34:24.005440 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" event={"ID":"d732ac3b-c886-4109-b09e-780d6fd5b6f7","Type":"ContainerStarted","Data":"3c4eb2d32bcdfbdda0723477705c114936207d29f03ac567476f431c382202f3"} Dec 08 09:34:24 crc kubenswrapper[4662]: I1208 09:34:24.006386 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:24 crc kubenswrapper[4662]: I1208 09:34:24.007858 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2e6f51df-70aa-40f0-a95c-fe71805cae56" containerName="nova-api-log" containerID="cri-o://d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be" gracePeriod=30 Dec 08 09:34:24 crc kubenswrapper[4662]: I1208 09:34:24.008046 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958321e4-b44d-4fb2-9b2d-099e20273e33","Type":"ContainerStarted","Data":"039a951af1f03e92b2616a9ac64919e774d706a1bbcac5b7367cbb14100e94ac"} Dec 08 09:34:24 crc kubenswrapper[4662]: I1208 09:34:24.008125 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"958321e4-b44d-4fb2-9b2d-099e20273e33","Type":"ContainerStarted","Data":"076337e7b22ed7f19dadbfff8d1ab91e2ed84e4f42d707f2ec4f426efa338e4e"} Dec 08 09:34:24 crc kubenswrapper[4662]: I1208 09:34:24.008230 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="2e6f51df-70aa-40f0-a95c-fe71805cae56" containerName="nova-api-api" containerID="cri-o://b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e" gracePeriod=30 Dec 08 09:34:24 crc kubenswrapper[4662]: I1208 09:34:24.029211 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" podStartSLOduration=3.029190336 podStartE2EDuration="3.029190336s" podCreationTimestamp="2025-12-08 09:34:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:34:24.027420109 +0000 UTC m=+1187.596448109" watchObservedRunningTime="2025-12-08 09:34:24.029190336 +0000 UTC m=+1187.598218326" Dec 08 09:34:24 crc kubenswrapper[4662]: I1208 09:34:24.140565 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:25 crc kubenswrapper[4662]: I1208 09:34:25.023787 4662 generic.go:334] "Generic (PLEG): container finished" podID="2e6f51df-70aa-40f0-a95c-fe71805cae56" containerID="d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be" exitCode=143 Dec 08 09:34:25 crc kubenswrapper[4662]: I1208 09:34:25.023856 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e6f51df-70aa-40f0-a95c-fe71805cae56","Type":"ContainerDied","Data":"d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be"} Dec 08 09:34:25 crc kubenswrapper[4662]: I1208 09:34:25.030760 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958321e4-b44d-4fb2-9b2d-099e20273e33","Type":"ContainerStarted","Data":"8e3a9b2b1a4c11b5fcb9516f336bb99eb0fa1cd23c921642a9fd84c43ba6c145"} Dec 08 09:34:26 crc kubenswrapper[4662]: I1208 09:34:26.047641 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958321e4-b44d-4fb2-9b2d-099e20273e33","Type":"ContainerStarted","Data":"5d99a2dd61b00d8d6630b9f41c5bf0bbeb5d9ae3e9879670d63602b4ec4ac0a1"} Dec 08 09:34:26 crc kubenswrapper[4662]: I1208 09:34:26.315074 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.058152 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958321e4-b44d-4fb2-9b2d-099e20273e33","Type":"ContainerStarted","Data":"8249edb7f21da267c9582c71b36a6ef87e083ec5b8d75e953e4b38f18f7ec684"} Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.058364 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="ceilometer-central-agent" containerID="cri-o://039a951af1f03e92b2616a9ac64919e774d706a1bbcac5b7367cbb14100e94ac" gracePeriod=30 Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.058589 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.058608 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="ceilometer-notification-agent" containerID="cri-o://8e3a9b2b1a4c11b5fcb9516f336bb99eb0fa1cd23c921642a9fd84c43ba6c145" gracePeriod=30 Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.058786 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="proxy-httpd" containerID="cri-o://8249edb7f21da267c9582c71b36a6ef87e083ec5b8d75e953e4b38f18f7ec684" gracePeriod=30 Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.060267 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="sg-core" containerID="cri-o://5d99a2dd61b00d8d6630b9f41c5bf0bbeb5d9ae3e9879670d63602b4ec4ac0a1" gracePeriod=30 Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.772513 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.789939 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.380679585 podStartE2EDuration="5.789920898s" podCreationTimestamp="2025-12-08 09:34:22 +0000 UTC" firstStartedPulling="2025-12-08 09:34:23.186725523 +0000 UTC m=+1186.755753513" lastFinishedPulling="2025-12-08 09:34:26.595966836 +0000 UTC m=+1190.164994826" observedRunningTime="2025-12-08 09:34:27.085942064 +0000 UTC m=+1190.654970074" watchObservedRunningTime="2025-12-08 09:34:27.789920898 +0000 UTC m=+1191.358948898" Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.963602 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpxgl\" (UniqueName: \"kubernetes.io/projected/2e6f51df-70aa-40f0-a95c-fe71805cae56-kube-api-access-rpxgl\") pod \"2e6f51df-70aa-40f0-a95c-fe71805cae56\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.963849 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-config-data\") pod \"2e6f51df-70aa-40f0-a95c-fe71805cae56\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.963911 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e6f51df-70aa-40f0-a95c-fe71805cae56-logs\") pod \"2e6f51df-70aa-40f0-a95c-fe71805cae56\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.963946 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-combined-ca-bundle\") pod \"2e6f51df-70aa-40f0-a95c-fe71805cae56\" (UID: \"2e6f51df-70aa-40f0-a95c-fe71805cae56\") " Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.967691 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e6f51df-70aa-40f0-a95c-fe71805cae56-logs" (OuterVolumeSpecName: "logs") pod "2e6f51df-70aa-40f0-a95c-fe71805cae56" (UID: "2e6f51df-70aa-40f0-a95c-fe71805cae56"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:34:27 crc kubenswrapper[4662]: I1208 09:34:27.971134 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e6f51df-70aa-40f0-a95c-fe71805cae56-kube-api-access-rpxgl" (OuterVolumeSpecName: "kube-api-access-rpxgl") pod "2e6f51df-70aa-40f0-a95c-fe71805cae56" (UID: "2e6f51df-70aa-40f0-a95c-fe71805cae56"). InnerVolumeSpecName "kube-api-access-rpxgl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.010977 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2e6f51df-70aa-40f0-a95c-fe71805cae56" (UID: "2e6f51df-70aa-40f0-a95c-fe71805cae56"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.018793 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-config-data" (OuterVolumeSpecName: "config-data") pod "2e6f51df-70aa-40f0-a95c-fe71805cae56" (UID: "2e6f51df-70aa-40f0-a95c-fe71805cae56"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.066375 4662 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2e6f51df-70aa-40f0-a95c-fe71805cae56-logs\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.066411 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.066426 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpxgl\" (UniqueName: \"kubernetes.io/projected/2e6f51df-70aa-40f0-a95c-fe71805cae56-kube-api-access-rpxgl\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.066438 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2e6f51df-70aa-40f0-a95c-fe71805cae56-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.074675 4662 generic.go:334] "Generic (PLEG): container finished" podID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerID="8249edb7f21da267c9582c71b36a6ef87e083ec5b8d75e953e4b38f18f7ec684" exitCode=0 Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.074715 4662 generic.go:334] "Generic (PLEG): container finished" podID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerID="5d99a2dd61b00d8d6630b9f41c5bf0bbeb5d9ae3e9879670d63602b4ec4ac0a1" exitCode=2 Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.074724 4662 generic.go:334] "Generic (PLEG): container finished" podID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerID="8e3a9b2b1a4c11b5fcb9516f336bb99eb0fa1cd23c921642a9fd84c43ba6c145" exitCode=0 Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.074808 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958321e4-b44d-4fb2-9b2d-099e20273e33","Type":"ContainerDied","Data":"8249edb7f21da267c9582c71b36a6ef87e083ec5b8d75e953e4b38f18f7ec684"} Dec 08 09:34:28 crc 
kubenswrapper[4662]: I1208 09:34:28.074842 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958321e4-b44d-4fb2-9b2d-099e20273e33","Type":"ContainerDied","Data":"5d99a2dd61b00d8d6630b9f41c5bf0bbeb5d9ae3e9879670d63602b4ec4ac0a1"} Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.074855 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958321e4-b44d-4fb2-9b2d-099e20273e33","Type":"ContainerDied","Data":"8e3a9b2b1a4c11b5fcb9516f336bb99eb0fa1cd23c921642a9fd84c43ba6c145"} Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.077424 4662 generic.go:334] "Generic (PLEG): container finished" podID="2e6f51df-70aa-40f0-a95c-fe71805cae56" containerID="b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e" exitCode=0 Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.077464 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e6f51df-70aa-40f0-a95c-fe71805cae56","Type":"ContainerDied","Data":"b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e"} Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.077489 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"2e6f51df-70aa-40f0-a95c-fe71805cae56","Type":"ContainerDied","Data":"799f9791938c79bb899f2bafc5cb6cddba14e0a0da0dfd83e40500a55c84100b"} Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.077527 4662 scope.go:117] "RemoveContainer" containerID="b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.077575 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.097680 4662 scope.go:117] "RemoveContainer" containerID="d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.117195 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.120469 4662 scope.go:117] "RemoveContainer" containerID="b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e" Dec 08 09:34:28 crc kubenswrapper[4662]: E1208 09:34:28.121409 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e\": container with ID starting with b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e not found: ID does not exist" containerID="b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.121470 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e"} err="failed to get container status \"b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e\": rpc error: code = NotFound desc = could not find container \"b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e\": container with ID starting with b862a69a7423649012662bfd3baf6d8156dccbb386ac90ee3b2f6ba2aa0d202e not found: ID does not exist" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.121497 4662 scope.go:117] "RemoveContainer" containerID="d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be" Dec 08 09:34:28 crc kubenswrapper[4662]: E1208 
09:34:28.121907 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be\": container with ID starting with d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be not found: ID does not exist" containerID="d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be"
Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.121960 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be"} err="failed to get container status \"d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be\": rpc error: code = NotFound desc = could not find container \"d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be\": container with ID starting with d97318ea964204e656d716af20081072ddebcc75a3b1a002d78881c9625d65be not found: ID does not exist"
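The two NotFound errors above read as the tail of an idempotent cleanup rather than a failure: scope.go logs RemoveContainer for b862a69a... and d97318ea..., the follow-up ContainerStatus query races the already-completed removal, CRI-O answers NotFound, and pod_container_deletor.go records the error and moves on. The earlier manager.go:1169 "Status 404" warning looks like the same kind of race on the creation side, firing before CRI-O had registered the new ceilometer sandbox, which subsequently reported ContainerStarted. The exit codes in the surrounding ContainerDied records follow the POSIX 128+N convention for signal deaths: nova-api-log exited 143 (128 + SIGTERM's 15, from the gracePeriod=30 kill logged at 09:34:24.007858 and honored within about a second, at 09:34:25.023787), proxy-httpd and ceilometer-notification-agent exited 0, and sg-core exited 2, an application error. A small decoder under that convention, with container IDs truncated for readability:

import signal

def describe_exit(code: int) -> str:
    """Decode a container exit code using the POSIX 128+N signal convention."""
    if code == 0:
        return "clean exit"
    if code > 128:
        sig = code - 128
        try:
            name = signal.Signals(sig).name   # e.g. 15 -> SIGTERM
        except ValueError:
            name = f"signal {sig}"
        return f"killed by {name}"
    return f"application error (status {code})"

for cid, code in {
    "d97318ea9642": 143,  # nova-api-log: SIGTERM from the 30 s grace-period kill
    "8249edb7f21d": 0,    # proxy-httpd
    "5d99a2dd61b0": 2,    # sg-core
}.items():
    print(cid, describe_exit(code))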
Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.148854 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.150383 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.150543 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.163159 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.269542 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.269862 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.269948 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.270085 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dvmt\" (UniqueName: \"kubernetes.io/projected/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-kube-api-access-7dvmt\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.270173 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-logs\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.270244 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-config-data\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.372247 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.372299 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-combined-ca-bundle\") pod 
\"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.372325 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.372399 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7dvmt\" (UniqueName: \"kubernetes.io/projected/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-kube-api-access-7dvmt\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.372437 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-logs\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.372465 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-config-data\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.377341 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-logs\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.379782 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-config-data\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.380250 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.381067 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-public-tls-certs\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.381413 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.393345 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7dvmt\" (UniqueName: \"kubernetes.io/projected/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-kube-api-access-7dvmt\") pod \"nova-api-0\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " 
pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.460099 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.656992 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.694768 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.719901 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e6f51df-70aa-40f0-a95c-fe71805cae56" path="/var/lib/kubelet/pods/2e6f51df-70aa-40f0-a95c-fe71805cae56/volumes" Dec 08 09:34:28 crc kubenswrapper[4662]: I1208 09:34:28.916365 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.102418 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2d275a3-56c3-4244-b4fc-86162cf0bfb2","Type":"ContainerStarted","Data":"8cdf9bfb14073b3ea627b56c8e80e68cabcf8953c27cdc0206cae1d4eead7815"} Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.118424 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.311564 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-fp672"] Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.312804 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.315730 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.316493 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.320245 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fp672"] Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.496066 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5w6q9\" (UniqueName: \"kubernetes.io/projected/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-kube-api-access-5w6q9\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.496388 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-scripts\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.496430 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 
09:34:29.496459 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-config-data\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.598261 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5w6q9\" (UniqueName: \"kubernetes.io/projected/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-kube-api-access-5w6q9\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.598357 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-scripts\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.598401 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.598427 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-config-data\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.602546 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-scripts\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.606650 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.611340 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-config-data\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.639481 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5w6q9\" (UniqueName: \"kubernetes.io/projected/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-kube-api-access-5w6q9\") pod \"nova-cell1-cell-mapping-fp672\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:29 crc kubenswrapper[4662]: I1208 09:34:29.641981 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.113364 4662 generic.go:334] "Generic (PLEG): container finished" podID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerID="039a951af1f03e92b2616a9ac64919e774d706a1bbcac5b7367cbb14100e94ac" exitCode=0 Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.113688 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958321e4-b44d-4fb2-9b2d-099e20273e33","Type":"ContainerDied","Data":"039a951af1f03e92b2616a9ac64919e774d706a1bbcac5b7367cbb14100e94ac"} Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.117125 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2d275a3-56c3-4244-b4fc-86162cf0bfb2","Type":"ContainerStarted","Data":"9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd"} Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.117155 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2d275a3-56c3-4244-b4fc-86162cf0bfb2","Type":"ContainerStarted","Data":"736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2"} Dec 08 09:34:30 crc kubenswrapper[4662]: W1208 09:34:30.148410 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ceb80a9_e524_4d98_87dd_ccd850c4b1ce.slice/crio-5c63ee03f92d8cda38503c6121a1bacb6d3d10978c3c351a95a8cb07564333d2 WatchSource:0}: Error finding container 5c63ee03f92d8cda38503c6121a1bacb6d3d10978c3c351a95a8cb07564333d2: Status 404 returned error can't find the container with id 5c63ee03f92d8cda38503c6121a1bacb6d3d10978c3c351a95a8cb07564333d2 Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.150990 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-fp672"] Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.156554 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.156536991 podStartE2EDuration="2.156536991s" podCreationTimestamp="2025-12-08 09:34:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:34:30.14749543 +0000 UTC m=+1193.716523420" watchObservedRunningTime="2025-12-08 09:34:30.156536991 +0000 UTC m=+1193.725564981" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.192630 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.312273 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-ceilometer-tls-certs\") pod \"958321e4-b44d-4fb2-9b2d-099e20273e33\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.312329 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-scripts\") pod \"958321e4-b44d-4fb2-9b2d-099e20273e33\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.312352 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-run-httpd\") pod \"958321e4-b44d-4fb2-9b2d-099e20273e33\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.312369 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-log-httpd\") pod \"958321e4-b44d-4fb2-9b2d-099e20273e33\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.312391 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-sg-core-conf-yaml\") pod \"958321e4-b44d-4fb2-9b2d-099e20273e33\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.312452 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-combined-ca-bundle\") pod \"958321e4-b44d-4fb2-9b2d-099e20273e33\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.312471 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-config-data\") pod \"958321e4-b44d-4fb2-9b2d-099e20273e33\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.312511 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6sbx\" (UniqueName: \"kubernetes.io/projected/958321e4-b44d-4fb2-9b2d-099e20273e33-kube-api-access-g6sbx\") pod \"958321e4-b44d-4fb2-9b2d-099e20273e33\" (UID: \"958321e4-b44d-4fb2-9b2d-099e20273e33\") " Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.314137 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "958321e4-b44d-4fb2-9b2d-099e20273e33" (UID: "958321e4-b44d-4fb2-9b2d-099e20273e33"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.315277 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "958321e4-b44d-4fb2-9b2d-099e20273e33" (UID: "958321e4-b44d-4fb2-9b2d-099e20273e33"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.318470 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-scripts" (OuterVolumeSpecName: "scripts") pod "958321e4-b44d-4fb2-9b2d-099e20273e33" (UID: "958321e4-b44d-4fb2-9b2d-099e20273e33"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.319196 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/958321e4-b44d-4fb2-9b2d-099e20273e33-kube-api-access-g6sbx" (OuterVolumeSpecName: "kube-api-access-g6sbx") pod "958321e4-b44d-4fb2-9b2d-099e20273e33" (UID: "958321e4-b44d-4fb2-9b2d-099e20273e33"). InnerVolumeSpecName "kube-api-access-g6sbx". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.346555 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "958321e4-b44d-4fb2-9b2d-099e20273e33" (UID: "958321e4-b44d-4fb2-9b2d-099e20273e33"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.370841 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "958321e4-b44d-4fb2-9b2d-099e20273e33" (UID: "958321e4-b44d-4fb2-9b2d-099e20273e33"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.388778 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "958321e4-b44d-4fb2-9b2d-099e20273e33" (UID: "958321e4-b44d-4fb2-9b2d-099e20273e33"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.416041 4662 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.416385 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.416400 4662 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-run-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.416412 4662 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/958321e4-b44d-4fb2-9b2d-099e20273e33-log-httpd\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.416424 4662 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.416437 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.416449 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6sbx\" (UniqueName: \"kubernetes.io/projected/958321e4-b44d-4fb2-9b2d-099e20273e33-kube-api-access-g6sbx\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.421022 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-config-data" (OuterVolumeSpecName: "config-data") pod "958321e4-b44d-4fb2-9b2d-099e20273e33" (UID: "958321e4-b44d-4fb2-9b2d-099e20273e33"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:30 crc kubenswrapper[4662]: I1208 09:34:30.518538 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/958321e4-b44d-4fb2-9b2d-099e20273e33-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.128094 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fp672" event={"ID":"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce","Type":"ContainerStarted","Data":"21a9d9ef05bbc53b027c9da0065ab413149dce48eb928a0f0ac748a97fd15e4c"} Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.128139 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fp672" event={"ID":"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce","Type":"ContainerStarted","Data":"5c63ee03f92d8cda38503c6121a1bacb6d3d10978c3c351a95a8cb07564333d2"} Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.139248 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.139289 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"958321e4-b44d-4fb2-9b2d-099e20273e33","Type":"ContainerDied","Data":"076337e7b22ed7f19dadbfff8d1ab91e2ed84e4f42d707f2ec4f426efa338e4e"} Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.139325 4662 scope.go:117] "RemoveContainer" containerID="8249edb7f21da267c9582c71b36a6ef87e083ec5b8d75e953e4b38f18f7ec684" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.149375 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-fp672" podStartSLOduration=2.149361231 podStartE2EDuration="2.149361231s" podCreationTimestamp="2025-12-08 09:34:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:34:31.146135795 +0000 UTC m=+1194.715163785" watchObservedRunningTime="2025-12-08 09:34:31.149361231 +0000 UTC m=+1194.718389221" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.168378 4662 scope.go:117] "RemoveContainer" containerID="5d99a2dd61b00d8d6630b9f41c5bf0bbeb5d9ae3e9879670d63602b4ec4ac0a1" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.176805 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.200669 4662 scope.go:117] "RemoveContainer" containerID="8e3a9b2b1a4c11b5fcb9516f336bb99eb0fa1cd23c921642a9fd84c43ba6c145" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.203651 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.223872 4662 scope.go:117] "RemoveContainer" containerID="039a951af1f03e92b2616a9ac64919e774d706a1bbcac5b7367cbb14100e94ac" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.230393 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:31 crc kubenswrapper[4662]: E1208 09:34:31.231062 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="proxy-httpd" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.231181 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="proxy-httpd" Dec 08 09:34:31 crc kubenswrapper[4662]: E1208 09:34:31.231268 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="sg-core" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.231333 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="sg-core" Dec 08 09:34:31 crc kubenswrapper[4662]: E1208 09:34:31.231411 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="ceilometer-notification-agent" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.231496 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="ceilometer-notification-agent" Dec 08 09:34:31 crc kubenswrapper[4662]: E1208 09:34:31.231571 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="ceilometer-central-agent" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.231635 4662 
state_mem.go:107] "Deleted CPUSet assignment" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="ceilometer-central-agent" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.239926 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="ceilometer-central-agent" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.240145 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="ceilometer-notification-agent" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.240258 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="proxy-httpd" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.240346 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" containerName="sg-core" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.242261 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.243528 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.258429 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.258456 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.258494 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.432017 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.432091 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngtlp\" (UniqueName: \"kubernetes.io/projected/792a4482-03d6-4850-a692-26fa0269fadf-kube-api-access-ngtlp\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.432194 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792a4482-03d6-4850-a692-26fa0269fadf-log-httpd\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.432280 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-scripts\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.432339 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-sg-core-conf-yaml\") 
pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.432448 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-config-data\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.432517 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.432548 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792a4482-03d6-4850-a692-26fa0269fadf-run-httpd\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.460903 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.522433 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-km2bx"] Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.528226 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" podUID="44fba630-a42b-4233-a201-95137e220c54" containerName="dnsmasq-dns" containerID="cri-o://8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75" gracePeriod=10 Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.538051 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngtlp\" (UniqueName: \"kubernetes.io/projected/792a4482-03d6-4850-a692-26fa0269fadf-kube-api-access-ngtlp\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.538112 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792a4482-03d6-4850-a692-26fa0269fadf-log-httpd\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.538159 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-scripts\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.538195 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.538224 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-config-data\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.538304 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.538330 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792a4482-03d6-4850-a692-26fa0269fadf-run-httpd\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.538401 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.541072 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792a4482-03d6-4850-a692-26fa0269fadf-log-httpd\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.542344 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/792a4482-03d6-4850-a692-26fa0269fadf-run-httpd\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.557779 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.563018 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-config-data\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.563585 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.568495 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngtlp\" (UniqueName: \"kubernetes.io/projected/792a4482-03d6-4850-a692-26fa0269fadf-kube-api-access-ngtlp\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.570761 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.607320 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/792a4482-03d6-4850-a692-26fa0269fadf-scripts\") pod \"ceilometer-0\" (UID: \"792a4482-03d6-4850-a692-26fa0269fadf\") " pod="openstack/ceilometer-0" Dec 08 09:34:31 crc kubenswrapper[4662]: I1208 09:34:31.880434 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.050247 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.157394 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-dns-svc\") pod \"44fba630-a42b-4233-a201-95137e220c54\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.157847 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qd5md\" (UniqueName: \"kubernetes.io/projected/44fba630-a42b-4233-a201-95137e220c54-kube-api-access-qd5md\") pod \"44fba630-a42b-4233-a201-95137e220c54\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.157896 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-config\") pod \"44fba630-a42b-4233-a201-95137e220c54\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.157946 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-sb\") pod \"44fba630-a42b-4233-a201-95137e220c54\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.158008 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-nb\") pod \"44fba630-a42b-4233-a201-95137e220c54\" (UID: \"44fba630-a42b-4233-a201-95137e220c54\") " Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.173466 4662 generic.go:334] "Generic (PLEG): container finished" podID="44fba630-a42b-4233-a201-95137e220c54" containerID="8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75" exitCode=0 Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.174371 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.174444 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" event={"ID":"44fba630-a42b-4233-a201-95137e220c54","Type":"ContainerDied","Data":"8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75"} Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.174477 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" event={"ID":"44fba630-a42b-4233-a201-95137e220c54","Type":"ContainerDied","Data":"4ead46675fa403c0ce6db97af9f098b8085ffd9da7b0b0fe415094bffc1d8761"} Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.174497 4662 scope.go:117] "RemoveContainer" containerID="8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.188007 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44fba630-a42b-4233-a201-95137e220c54-kube-api-access-qd5md" (OuterVolumeSpecName: "kube-api-access-qd5md") pod "44fba630-a42b-4233-a201-95137e220c54" (UID: "44fba630-a42b-4233-a201-95137e220c54"). InnerVolumeSpecName "kube-api-access-qd5md". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.237835 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44fba630-a42b-4233-a201-95137e220c54" (UID: "44fba630-a42b-4233-a201-95137e220c54"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.239196 4662 scope.go:117] "RemoveContainer" containerID="478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.243425 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-config" (OuterVolumeSpecName: "config") pod "44fba630-a42b-4233-a201-95137e220c54" (UID: "44fba630-a42b-4233-a201-95137e220c54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.258531 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44fba630-a42b-4233-a201-95137e220c54" (UID: "44fba630-a42b-4233-a201-95137e220c54"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.260088 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.260115 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.260126 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qd5md\" (UniqueName: \"kubernetes.io/projected/44fba630-a42b-4233-a201-95137e220c54-kube-api-access-qd5md\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.260139 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.263548 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44fba630-a42b-4233-a201-95137e220c54" (UID: "44fba630-a42b-4233-a201-95137e220c54"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.274351 4662 scope.go:117] "RemoveContainer" containerID="8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75" Dec 08 09:34:32 crc kubenswrapper[4662]: E1208 09:34:32.274777 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75\": container with ID starting with 8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75 not found: ID does not exist" containerID="8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.274804 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75"} err="failed to get container status \"8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75\": rpc error: code = NotFound desc = could not find container \"8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75\": container with ID starting with 8c9863244a219958aeccdb1fbcd67c44e69dff4c7a6c4614fbfeed716166bd75 not found: ID does not exist" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.274823 4662 scope.go:117] "RemoveContainer" containerID="478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130" Dec 08 09:34:32 crc kubenswrapper[4662]: E1208 09:34:32.275036 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130\": container with ID starting with 478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130 not found: ID does not exist" containerID="478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.275057 4662 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130"} err="failed to get container status \"478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130\": rpc error: code = NotFound desc = could not find container \"478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130\": container with ID starting with 478c949c2a15f02bdbaf1054a4ff8cbb30a71a078eaae14e40bc85798bc50130 not found: ID does not exist" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.361238 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44fba630-a42b-4233-a201-95137e220c54-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.489797 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Dec 08 09:34:32 crc kubenswrapper[4662]: W1208 09:34:32.502604 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod792a4482_03d6_4850_a692_26fa0269fadf.slice/crio-fdc41fe8ac384006d13ece60317321473350077df826f7607870a55bd9b3db6c WatchSource:0}: Error finding container fdc41fe8ac384006d13ece60317321473350077df826f7607870a55bd9b3db6c: Status 404 returned error can't find the container with id fdc41fe8ac384006d13ece60317321473350077df826f7607870a55bd9b3db6c Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.514234 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-km2bx"] Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.540848 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8b8cf6657-km2bx"] Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.712345 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44fba630-a42b-4233-a201-95137e220c54" path="/var/lib/kubelet/pods/44fba630-a42b-4233-a201-95137e220c54/volumes" Dec 08 09:34:32 crc kubenswrapper[4662]: I1208 09:34:32.713605 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="958321e4-b44d-4fb2-9b2d-099e20273e33" path="/var/lib/kubelet/pods/958321e4-b44d-4fb2-9b2d-099e20273e33/volumes" Dec 08 09:34:33 crc kubenswrapper[4662]: I1208 09:34:33.186477 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"792a4482-03d6-4850-a692-26fa0269fadf","Type":"ContainerStarted","Data":"fdc41fe8ac384006d13ece60317321473350077df826f7607870a55bd9b3db6c"} Dec 08 09:34:34 crc kubenswrapper[4662]: I1208 09:34:34.216816 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"792a4482-03d6-4850-a692-26fa0269fadf","Type":"ContainerStarted","Data":"6190abbad73b772aba8cb86f73eac3c1cbfbffcd0fe56f1472a662808d81d28e"} Dec 08 09:34:34 crc kubenswrapper[4662]: I1208 09:34:34.217327 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"792a4482-03d6-4850-a692-26fa0269fadf","Type":"ContainerStarted","Data":"1186bab697a078c2e9d202d2333eeac22ddf6441814bcc2e459d386736f8d480"} Dec 08 09:34:35 crc kubenswrapper[4662]: I1208 09:34:35.230789 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"792a4482-03d6-4850-a692-26fa0269fadf","Type":"ContainerStarted","Data":"d7a933327f9ab7df1a4b1bbde99b849bbb2bc181b3d0662fe895a2e299f42c26"} Dec 08 09:34:36 crc kubenswrapper[4662]: I1208 09:34:36.246042 4662 generic.go:334] "Generic (PLEG): container 
finished" podID="3ceb80a9-e524-4d98-87dd-ccd850c4b1ce" containerID="21a9d9ef05bbc53b027c9da0065ab413149dce48eb928a0f0ac748a97fd15e4c" exitCode=0 Dec 08 09:34:36 crc kubenswrapper[4662]: I1208 09:34:36.247343 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fp672" event={"ID":"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce","Type":"ContainerDied","Data":"21a9d9ef05bbc53b027c9da0065ab413149dce48eb928a0f0ac748a97fd15e4c"} Dec 08 09:34:36 crc kubenswrapper[4662]: I1208 09:34:36.258555 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"792a4482-03d6-4850-a692-26fa0269fadf","Type":"ContainerStarted","Data":"8d422ff4a9227a43a71224ae1b8537d8f4aef11938382fd098b5eb7ea6f82861"} Dec 08 09:34:36 crc kubenswrapper[4662]: I1208 09:34:36.259621 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Dec 08 09:34:36 crc kubenswrapper[4662]: I1208 09:34:36.294271 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.979667689 podStartE2EDuration="5.294250434s" podCreationTimestamp="2025-12-08 09:34:31 +0000 UTC" firstStartedPulling="2025-12-08 09:34:32.505697879 +0000 UTC m=+1196.074725869" lastFinishedPulling="2025-12-08 09:34:35.820280624 +0000 UTC m=+1199.389308614" observedRunningTime="2025-12-08 09:34:36.292840856 +0000 UTC m=+1199.861868856" watchObservedRunningTime="2025-12-08 09:34:36.294250434 +0000 UTC m=+1199.863278434" Dec 08 09:34:36 crc kubenswrapper[4662]: I1208 09:34:36.945686 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8b8cf6657-km2bx" podUID="44fba630-a42b-4233-a201-95137e220c54" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.173:5353: i/o timeout" Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.682013 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.773120 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5w6q9\" (UniqueName: \"kubernetes.io/projected/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-kube-api-access-5w6q9\") pod \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.773827 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-config-data\") pod \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.773972 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-combined-ca-bundle\") pod \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.774056 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-scripts\") pod \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\" (UID: \"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce\") " Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.779562 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-kube-api-access-5w6q9" (OuterVolumeSpecName: "kube-api-access-5w6q9") pod "3ceb80a9-e524-4d98-87dd-ccd850c4b1ce" (UID: "3ceb80a9-e524-4d98-87dd-ccd850c4b1ce"). InnerVolumeSpecName "kube-api-access-5w6q9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.794156 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-scripts" (OuterVolumeSpecName: "scripts") pod "3ceb80a9-e524-4d98-87dd-ccd850c4b1ce" (UID: "3ceb80a9-e524-4d98-87dd-ccd850c4b1ce"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.797238 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-config-data" (OuterVolumeSpecName: "config-data") pod "3ceb80a9-e524-4d98-87dd-ccd850c4b1ce" (UID: "3ceb80a9-e524-4d98-87dd-ccd850c4b1ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.804190 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3ceb80a9-e524-4d98-87dd-ccd850c4b1ce" (UID: "3ceb80a9-e524-4d98-87dd-ccd850c4b1ce"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.876177 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5w6q9\" (UniqueName: \"kubernetes.io/projected/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-kube-api-access-5w6q9\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.876210 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.876220 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:37 crc kubenswrapper[4662]: I1208 09:34:37.876228 4662 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce-scripts\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.284587 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-fp672" Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.293163 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-fp672" event={"ID":"3ceb80a9-e524-4d98-87dd-ccd850c4b1ce","Type":"ContainerDied","Data":"5c63ee03f92d8cda38503c6121a1bacb6d3d10978c3c351a95a8cb07564333d2"} Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.293209 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c63ee03f92d8cda38503c6121a1bacb6d3d10978c3c351a95a8cb07564333d2" Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.460365 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.460934 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.470448 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.484973 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.485239 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="c4101dc0-834a-401a-93da-ac0c8db6e9d5" containerName="nova-scheduler-scheduler" containerID="cri-o://d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff" gracePeriod=30 Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.538503 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.538764 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerName="nova-metadata-log" containerID="cri-o://2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d" gracePeriod=30 Dec 08 09:34:38 crc kubenswrapper[4662]: I1208 09:34:38.538819 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" 
containerName="nova-metadata-metadata" containerID="cri-o://ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73" gracePeriod=30 Dec 08 09:34:39 crc kubenswrapper[4662]: I1208 09:34:39.294810 4662 generic.go:334] "Generic (PLEG): container finished" podID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerID="2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d" exitCode=143 Dec 08 09:34:39 crc kubenswrapper[4662]: I1208 09:34:39.294900 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7","Type":"ContainerDied","Data":"2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d"} Dec 08 09:34:39 crc kubenswrapper[4662]: I1208 09:34:39.472882 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 08 09:34:39 crc kubenswrapper[4662]: I1208 09:34:39.472986 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.186:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 08 09:34:40 crc kubenswrapper[4662]: E1208 09:34:40.009729 4662 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff is running failed: container process not found" containerID="d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 08 09:34:40 crc kubenswrapper[4662]: E1208 09:34:40.012370 4662 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff is running failed: container process not found" containerID="d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 08 09:34:40 crc kubenswrapper[4662]: E1208 09:34:40.012622 4662 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff is running failed: container process not found" containerID="d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Dec 08 09:34:40 crc kubenswrapper[4662]: E1208 09:34:40.012655 4662 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="c4101dc0-834a-401a-93da-ac0c8db6e9d5" containerName="nova-scheduler-scheduler" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.130027 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.221852 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djz2z\" (UniqueName: \"kubernetes.io/projected/c4101dc0-834a-401a-93da-ac0c8db6e9d5-kube-api-access-djz2z\") pod \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.222183 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-config-data\") pod \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.222247 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-combined-ca-bundle\") pod \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\" (UID: \"c4101dc0-834a-401a-93da-ac0c8db6e9d5\") " Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.232402 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4101dc0-834a-401a-93da-ac0c8db6e9d5-kube-api-access-djz2z" (OuterVolumeSpecName: "kube-api-access-djz2z") pod "c4101dc0-834a-401a-93da-ac0c8db6e9d5" (UID: "c4101dc0-834a-401a-93da-ac0c8db6e9d5"). InnerVolumeSpecName "kube-api-access-djz2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.256601 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-config-data" (OuterVolumeSpecName: "config-data") pod "c4101dc0-834a-401a-93da-ac0c8db6e9d5" (UID: "c4101dc0-834a-401a-93da-ac0c8db6e9d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.285678 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c4101dc0-834a-401a-93da-ac0c8db6e9d5" (UID: "c4101dc0-834a-401a-93da-ac0c8db6e9d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.306686 4662 generic.go:334] "Generic (PLEG): container finished" podID="c4101dc0-834a-401a-93da-ac0c8db6e9d5" containerID="d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff" exitCode=0 Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.306961 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerName="nova-api-log" containerID="cri-o://736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2" gracePeriod=30 Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.307287 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.309389 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerName="nova-api-api" containerID="cri-o://9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd" gracePeriod=30 Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.309703 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c4101dc0-834a-401a-93da-ac0c8db6e9d5","Type":"ContainerDied","Data":"d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff"} Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.309785 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"c4101dc0-834a-401a-93da-ac0c8db6e9d5","Type":"ContainerDied","Data":"1089c1b9bf8959229c530e5caf9178a7e94cf5783d42bbc2ed40869e7d8aa97b"} Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.309852 4662 scope.go:117] "RemoveContainer" containerID="d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.323820 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.323849 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c4101dc0-834a-401a-93da-ac0c8db6e9d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.323861 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djz2z\" (UniqueName: \"kubernetes.io/projected/c4101dc0-834a-401a-93da-ac0c8db6e9d5-kube-api-access-djz2z\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.334974 4662 scope.go:117] "RemoveContainer" containerID="d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff" Dec 08 09:34:40 crc kubenswrapper[4662]: E1208 09:34:40.336987 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff\": container with ID starting with d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff not found: ID does not exist" containerID="d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.337026 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff"} err="failed to get container status \"d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff\": rpc error: code = NotFound desc = could not find container \"d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff\": container with ID starting with d39cd615ff98bf185f11594288cb0ef6a2fe00de5ed3322bf9ee59b8418521ff not found: ID does not exist" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.352458 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.361886 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:34:40 crc 
kubenswrapper[4662]: I1208 09:34:40.376597 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:34:40 crc kubenswrapper[4662]: E1208 09:34:40.377053 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ceb80a9-e524-4d98-87dd-ccd850c4b1ce" containerName="nova-manage" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.377076 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ceb80a9-e524-4d98-87dd-ccd850c4b1ce" containerName="nova-manage" Dec 08 09:34:40 crc kubenswrapper[4662]: E1208 09:34:40.377093 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4101dc0-834a-401a-93da-ac0c8db6e9d5" containerName="nova-scheduler-scheduler" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.377099 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4101dc0-834a-401a-93da-ac0c8db6e9d5" containerName="nova-scheduler-scheduler" Dec 08 09:34:40 crc kubenswrapper[4662]: E1208 09:34:40.377120 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44fba630-a42b-4233-a201-95137e220c54" containerName="init" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.377126 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="44fba630-a42b-4233-a201-95137e220c54" containerName="init" Dec 08 09:34:40 crc kubenswrapper[4662]: E1208 09:34:40.377140 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44fba630-a42b-4233-a201-95137e220c54" containerName="dnsmasq-dns" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.377145 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="44fba630-a42b-4233-a201-95137e220c54" containerName="dnsmasq-dns" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.377316 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="44fba630-a42b-4233-a201-95137e220c54" containerName="dnsmasq-dns" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.377338 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4101dc0-834a-401a-93da-ac0c8db6e9d5" containerName="nova-scheduler-scheduler" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.377356 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ceb80a9-e524-4d98-87dd-ccd850c4b1ce" containerName="nova-manage" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.378080 4662 util.go:30] "No sandbox for pod can be found. 
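The burst of cpu_manager.go:410, state_mem.go:107 and memory_manager.go:354 records fires while the SyncLoop ADD for the replacement nova-scheduler-0 is admitted: before accepting the new pod, kubelet sweeps per-container CPU and memory assignments left behind by pods that no longer exist (the just-removed dnsmasq-dns, nova-scheduler-scheduler and nova-manage containers). A minimal sketch of that sweep over an in-memory map (types and method names are illustrative, not the kubelet's):

    package main

    import "fmt"

    // assignments maps podUID -> containerName -> cpuset, the same keys
    // the "Deleted CPUSet assignment" records above are written with.
    type assignments map[string]map[string]string

    // removeStale drops entries whose pod is no longer active.
    func (a assignments) removeStale(activePods map[string]bool) {
        for podUID, containers := range a {
            if activePods[podUID] {
                continue
            }
            for name := range containers {
                fmt.Printf("Deleted CPUSet assignment podUID=%q containerName=%q\n", podUID, name)
            }
            delete(a, podUID) // deleting the current key during range is safe in Go
        }
    }

    func main() {
        a := assignments{
            "44fba630-a42b-4233-a201-95137e220c54": {"init": "0-3", "dnsmasq-dns": "0-3"},
        }
        a.removeStale(map[string]bool{}) // no live pod claims this UID anymore
    }
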
Need to start a new one" pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.380075 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.403045 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.426474 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76792\" (UniqueName: \"kubernetes.io/projected/e9acfb80-5f9e-4340-9681-95d7c325bfd2-kube-api-access-76792\") pod \"nova-scheduler-0\" (UID: \"e9acfb80-5f9e-4340-9681-95d7c325bfd2\") " pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.426566 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acfb80-5f9e-4340-9681-95d7c325bfd2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9acfb80-5f9e-4340-9681-95d7c325bfd2\") " pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.426596 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9acfb80-5f9e-4340-9681-95d7c325bfd2-config-data\") pod \"nova-scheduler-0\" (UID: \"e9acfb80-5f9e-4340-9681-95d7c325bfd2\") " pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.528186 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76792\" (UniqueName: \"kubernetes.io/projected/e9acfb80-5f9e-4340-9681-95d7c325bfd2-kube-api-access-76792\") pod \"nova-scheduler-0\" (UID: \"e9acfb80-5f9e-4340-9681-95d7c325bfd2\") " pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.528308 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acfb80-5f9e-4340-9681-95d7c325bfd2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9acfb80-5f9e-4340-9681-95d7c325bfd2\") " pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.528344 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9acfb80-5f9e-4340-9681-95d7c325bfd2-config-data\") pod \"nova-scheduler-0\" (UID: \"e9acfb80-5f9e-4340-9681-95d7c325bfd2\") " pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.532493 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e9acfb80-5f9e-4340-9681-95d7c325bfd2-config-data\") pod \"nova-scheduler-0\" (UID: \"e9acfb80-5f9e-4340-9681-95d7c325bfd2\") " pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.533060 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e9acfb80-5f9e-4340-9681-95d7c325bfd2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"e9acfb80-5f9e-4340-9681-95d7c325bfd2\") " pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.545872 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76792\" (UniqueName: 
\"kubernetes.io/projected/e9acfb80-5f9e-4340-9681-95d7c325bfd2-kube-api-access-76792\") pod \"nova-scheduler-0\" (UID: \"e9acfb80-5f9e-4340-9681-95d7c325bfd2\") " pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.699258 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Dec 08 09:34:40 crc kubenswrapper[4662]: I1208 09:34:40.729114 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4101dc0-834a-401a-93da-ac0c8db6e9d5" path="/var/lib/kubelet/pods/c4101dc0-834a-401a-93da-ac0c8db6e9d5/volumes" Dec 08 09:34:41 crc kubenswrapper[4662]: I1208 09:34:41.256167 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Dec 08 09:34:41 crc kubenswrapper[4662]: I1208 09:34:41.346117 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9acfb80-5f9e-4340-9681-95d7c325bfd2","Type":"ContainerStarted","Data":"b452ec03df14a74b173d0cfa0cb9a9e7e6cdc9c7c8d7a555449f2cccf7a43e7e"} Dec 08 09:34:41 crc kubenswrapper[4662]: I1208 09:34:41.357544 4662 generic.go:334] "Generic (PLEG): container finished" podID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerID="736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2" exitCode=143 Dec 08 09:34:41 crc kubenswrapper[4662]: I1208 09:34:41.357592 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2d275a3-56c3-4244-b4fc-86162cf0bfb2","Type":"ContainerDied","Data":"736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2"} Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.160261 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.270664 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-config-data\") pod \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.270875 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-nova-metadata-tls-certs\") pod \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.271016 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-logs\") pod \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.271132 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fvl2d\" (UniqueName: \"kubernetes.io/projected/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-kube-api-access-fvl2d\") pod \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\" (UID: \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.271191 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-combined-ca-bundle\") pod \"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\" (UID: 
\"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7\") " Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.271849 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-logs" (OuterVolumeSpecName: "logs") pod "b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" (UID: "b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.272500 4662 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-logs\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.280237 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-kube-api-access-fvl2d" (OuterVolumeSpecName: "kube-api-access-fvl2d") pod "b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" (UID: "b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7"). InnerVolumeSpecName "kube-api-access-fvl2d". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.316904 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" (UID: "b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.337896 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-config-data" (OuterVolumeSpecName: "config-data") pod "b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" (UID: "b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.379596 4662 generic.go:334] "Generic (PLEG): container finished" podID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerID="ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73" exitCode=0 Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.379689 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7","Type":"ContainerDied","Data":"ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73"} Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.379725 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7","Type":"ContainerDied","Data":"a2c99578a102dabbcce3f4bfaada96444685aa334e41fbba0b8be537d7fec043"} Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.380228 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fvl2d\" (UniqueName: \"kubernetes.io/projected/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-kube-api-access-fvl2d\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.380245 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.380255 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.383993 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.384801 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"e9acfb80-5f9e-4340-9681-95d7c325bfd2","Type":"ContainerStarted","Data":"c5bbc4cabc5882171a327020e7ea289a8e8e9348cde317962ed4ae31250d1431"} Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.385056 4662 scope.go:117] "RemoveContainer" containerID="ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.411831 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.411808589 podStartE2EDuration="2.411808589s" podCreationTimestamp="2025-12-08 09:34:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:34:42.403467886 +0000 UTC m=+1205.972495886" watchObservedRunningTime="2025-12-08 09:34:42.411808589 +0000 UTC m=+1205.980836599" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.434190 4662 scope.go:117] "RemoveContainer" containerID="2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.434270 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" (UID: "b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7"). InnerVolumeSpecName "nova-metadata-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.456673 4662 scope.go:117] "RemoveContainer" containerID="ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73" Dec 08 09:34:42 crc kubenswrapper[4662]: E1208 09:34:42.457141 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73\": container with ID starting with ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73 not found: ID does not exist" containerID="ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.457187 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73"} err="failed to get container status \"ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73\": rpc error: code = NotFound desc = could not find container \"ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73\": container with ID starting with ba05d17d026233b18c7f41133fb7bdd65ea80c89c3c5bc340f5f6a5701376d73 not found: ID does not exist" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.457213 4662 scope.go:117] "RemoveContainer" containerID="2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d" Dec 08 09:34:42 crc kubenswrapper[4662]: E1208 09:34:42.457470 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d\": container with ID starting with 2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d not found: ID does not exist" containerID="2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.457500 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d"} err="failed to get container status \"2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d\": rpc error: code = NotFound desc = could not find container \"2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d\": container with ID starting with 2ee82f20610b3aa3557a2a243d7215f149eca9e05cbdacb71d520e5e4f16627d not found: ID does not exist" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.483409 4662 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.787247 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.797504 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.820455 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:34:42 crc kubenswrapper[4662]: E1208 09:34:42.820980 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerName="nova-metadata-log" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.821005 4662 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerName="nova-metadata-log" Dec 08 09:34:42 crc kubenswrapper[4662]: E1208 09:34:42.821042 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerName="nova-metadata-metadata" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.821051 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerName="nova-metadata-metadata" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.821289 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerName="nova-metadata-log" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.821331 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" containerName="nova-metadata-metadata" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.822520 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.825277 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.826851 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.837575 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.889565 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/411bcfde-bb2d-4274-a486-42d84f76e1c2-logs\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.889617 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411bcfde-bb2d-4274-a486-42d84f76e1c2-config-data\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.889655 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/411bcfde-bb2d-4274-a486-42d84f76e1c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.889864 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js9s7\" (UniqueName: \"kubernetes.io/projected/411bcfde-bb2d-4274-a486-42d84f76e1c2-kube-api-access-js9s7\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.889921 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411bcfde-bb2d-4274-a486-42d84f76e1c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc 
kubenswrapper[4662]: I1208 09:34:42.992980 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411bcfde-bb2d-4274-a486-42d84f76e1c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.993100 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/411bcfde-bb2d-4274-a486-42d84f76e1c2-logs\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.993129 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411bcfde-bb2d-4274-a486-42d84f76e1c2-config-data\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.993162 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/411bcfde-bb2d-4274-a486-42d84f76e1c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.993277 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js9s7\" (UniqueName: \"kubernetes.io/projected/411bcfde-bb2d-4274-a486-42d84f76e1c2-kube-api-access-js9s7\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.993572 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/411bcfde-bb2d-4274-a486-42d84f76e1c2-logs\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.996620 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411bcfde-bb2d-4274-a486-42d84f76e1c2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:42 crc kubenswrapper[4662]: I1208 09:34:42.996807 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411bcfde-bb2d-4274-a486-42d84f76e1c2-config-data\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:43 crc kubenswrapper[4662]: I1208 09:34:43.003266 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/411bcfde-bb2d-4274-a486-42d84f76e1c2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " pod="openstack/nova-metadata-0" Dec 08 09:34:43 crc kubenswrapper[4662]: I1208 09:34:43.017109 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js9s7\" (UniqueName: \"kubernetes.io/projected/411bcfde-bb2d-4274-a486-42d84f76e1c2-kube-api-access-js9s7\") pod \"nova-metadata-0\" (UID: \"411bcfde-bb2d-4274-a486-42d84f76e1c2\") " 
pod="openstack/nova-metadata-0" Dec 08 09:34:43 crc kubenswrapper[4662]: I1208 09:34:43.196087 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Dec 08 09:34:43 crc kubenswrapper[4662]: I1208 09:34:43.649179 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Dec 08 09:34:44 crc kubenswrapper[4662]: I1208 09:34:44.403614 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"411bcfde-bb2d-4274-a486-42d84f76e1c2","Type":"ContainerStarted","Data":"acc265ca5daa06d0139a4f6e0ce8721af1b59a40e7b4b9e10fa08f2fd79fe3d4"} Dec 08 09:34:44 crc kubenswrapper[4662]: I1208 09:34:44.404166 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"411bcfde-bb2d-4274-a486-42d84f76e1c2","Type":"ContainerStarted","Data":"70fae9bc1d9d24e0137a2878240725855a29e5f1317e0053e2ebc90c38cba91d"} Dec 08 09:34:44 crc kubenswrapper[4662]: I1208 09:34:44.404178 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"411bcfde-bb2d-4274-a486-42d84f76e1c2","Type":"ContainerStarted","Data":"4a26f7a99a90381afa4ad0f5795e2d77b8332032be03ce8e85145d236c472940"} Dec 08 09:34:44 crc kubenswrapper[4662]: I1208 09:34:44.429646 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.429624436 podStartE2EDuration="2.429624436s" podCreationTimestamp="2025-12-08 09:34:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:34:44.426134593 +0000 UTC m=+1207.995162583" watchObservedRunningTime="2025-12-08 09:34:44.429624436 +0000 UTC m=+1207.998652446" Dec 08 09:34:44 crc kubenswrapper[4662]: I1208 09:34:44.710975 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7" path="/var/lib/kubelet/pods/b58a11d8-dde7-49c5-a1d6-78d6e5f7e4e7/volumes" Dec 08 09:34:45 crc kubenswrapper[4662]: I1208 09:34:45.699947 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.192905 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.252652 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-config-data\") pod \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.252880 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-logs\") pod \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.252920 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-combined-ca-bundle\") pod \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.253526 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-logs" (OuterVolumeSpecName: "logs") pod "a2d275a3-56c3-4244-b4fc-86162cf0bfb2" (UID: "a2d275a3-56c3-4244-b4fc-86162cf0bfb2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.253585 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-public-tls-certs\") pod \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.253776 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-internal-tls-certs\") pod \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.253834 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7dvmt\" (UniqueName: \"kubernetes.io/projected/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-kube-api-access-7dvmt\") pod \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.254644 4662 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-logs\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.260711 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-kube-api-access-7dvmt" (OuterVolumeSpecName: "kube-api-access-7dvmt") pod "a2d275a3-56c3-4244-b4fc-86162cf0bfb2" (UID: "a2d275a3-56c3-4244-b4fc-86162cf0bfb2"). InnerVolumeSpecName "kube-api-access-7dvmt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.294975 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a2d275a3-56c3-4244-b4fc-86162cf0bfb2" (UID: "a2d275a3-56c3-4244-b4fc-86162cf0bfb2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.300465 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-config-data" (OuterVolumeSpecName: "config-data") pod "a2d275a3-56c3-4244-b4fc-86162cf0bfb2" (UID: "a2d275a3-56c3-4244-b4fc-86162cf0bfb2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.329900 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a2d275a3-56c3-4244-b4fc-86162cf0bfb2" (UID: "a2d275a3-56c3-4244-b4fc-86162cf0bfb2"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.355490 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a2d275a3-56c3-4244-b4fc-86162cf0bfb2" (UID: "a2d275a3-56c3-4244-b4fc-86162cf0bfb2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.356134 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-public-tls-certs\") pod \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\" (UID: \"a2d275a3-56c3-4244-b4fc-86162cf0bfb2\") " Dec 08 09:34:46 crc kubenswrapper[4662]: W1208 09:34:46.356248 4662 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a2d275a3-56c3-4244-b4fc-86162cf0bfb2/volumes/kubernetes.io~secret/public-tls-certs Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.356269 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a2d275a3-56c3-4244-b4fc-86162cf0bfb2" (UID: "a2d275a3-56c3-4244-b4fc-86162cf0bfb2"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.356840 4662 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.356956 4662 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.357051 4662 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.357112 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7dvmt\" (UniqueName: \"kubernetes.io/projected/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-kube-api-access-7dvmt\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.357170 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a2d275a3-56c3-4244-b4fc-86162cf0bfb2-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.427396 4662 generic.go:334] "Generic (PLEG): container finished" podID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerID="9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd" exitCode=0 Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.427437 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2d275a3-56c3-4244-b4fc-86162cf0bfb2","Type":"ContainerDied","Data":"9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd"} Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.427464 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a2d275a3-56c3-4244-b4fc-86162cf0bfb2","Type":"ContainerDied","Data":"8cdf9bfb14073b3ea627b56c8e80e68cabcf8953c27cdc0206cae1d4eead7815"} Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.427480 4662 scope.go:117] "RemoveContainer" containerID="9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.427483 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.451043 4662 scope.go:117] "RemoveContainer" containerID="736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.484695 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.491375 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.493551 4662 scope.go:117] "RemoveContainer" containerID="9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd" Dec 08 09:34:46 crc kubenswrapper[4662]: E1208 09:34:46.494417 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd\": container with ID starting with 9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd not found: ID does not exist" containerID="9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.494473 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd"} err="failed to get container status \"9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd\": rpc error: code = NotFound desc = could not find container \"9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd\": container with ID starting with 9e3c6b60ea90bc2ed07e2398e619d6c0e88a56622841ddd02a93cb4de26a47dd not found: ID does not exist" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.494502 4662 scope.go:117] "RemoveContainer" containerID="736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2" Dec 08 09:34:46 crc kubenswrapper[4662]: E1208 09:34:46.494910 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2\": container with ID starting with 736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2 not found: ID does not exist" containerID="736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.494957 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2"} err="failed to get container status \"736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2\": rpc error: code = NotFound desc = could not find container \"736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2\": container with ID starting with 736f2aac16ce1e815bb47453ab5fadd930630b1d394ce73d37bb554974ad18c2 not found: ID does not exist" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.503230 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:46 crc kubenswrapper[4662]: E1208 09:34:46.504050 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerName="nova-api-log" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.504273 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerName="nova-api-log" Dec 08 09:34:46 crc 
kubenswrapper[4662]: E1208 09:34:46.504443 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerName="nova-api-api" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.504554 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerName="nova-api-api" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.505090 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerName="nova-api-api" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.505261 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" containerName="nova-api-log" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.507099 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.511037 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.531100 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.531929 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.532528 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.564367 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.564434 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vstkz\" (UniqueName: \"kubernetes.io/projected/84735737-6d94-4bae-8932-3651b52a2b37-kube-api-access-vstkz\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.564461 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-public-tls-certs\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.564488 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84735737-6d94-4bae-8932-3651b52a2b37-logs\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.564523 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-config-data\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.564555 4662 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.666316 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-config-data\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.667289 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.667728 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.667802 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vstkz\" (UniqueName: \"kubernetes.io/projected/84735737-6d94-4bae-8932-3651b52a2b37-kube-api-access-vstkz\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.667869 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-public-tls-certs\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.667957 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84735737-6d94-4bae-8932-3651b52a2b37-logs\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.668664 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/84735737-6d94-4bae-8932-3651b52a2b37-logs\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.671369 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-config-data\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.671541 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.672532 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-internal-tls-certs\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.674285 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/84735737-6d94-4bae-8932-3651b52a2b37-public-tls-certs\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.687210 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vstkz\" (UniqueName: \"kubernetes.io/projected/84735737-6d94-4bae-8932-3651b52a2b37-kube-api-access-vstkz\") pod \"nova-api-0\" (UID: \"84735737-6d94-4bae-8932-3651b52a2b37\") " pod="openstack/nova-api-0" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.733332 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2d275a3-56c3-4244-b4fc-86162cf0bfb2" path="/var/lib/kubelet/pods/a2d275a3-56c3-4244-b4fc-86162cf0bfb2/volumes" Dec 08 09:34:46 crc kubenswrapper[4662]: I1208 09:34:46.851113 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Dec 08 09:34:47 crc kubenswrapper[4662]: I1208 09:34:47.360917 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Dec 08 09:34:47 crc kubenswrapper[4662]: I1208 09:34:47.436493 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84735737-6d94-4bae-8932-3651b52a2b37","Type":"ContainerStarted","Data":"f29029e1eb7402bfd6c190abc20e8c27b70156d5da09bb14f8ed9db5ca2accff"} Dec 08 09:34:48 crc kubenswrapper[4662]: I1208 09:34:48.197508 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 08 09:34:48 crc kubenswrapper[4662]: I1208 09:34:48.197854 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Dec 08 09:34:48 crc kubenswrapper[4662]: I1208 09:34:48.460151 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84735737-6d94-4bae-8932-3651b52a2b37","Type":"ContainerStarted","Data":"34daf908f190f8d3a1567c5b82ff900f4a1f5658012e7b77abbc90a301d040bd"} Dec 08 09:34:48 crc kubenswrapper[4662]: I1208 09:34:48.460422 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"84735737-6d94-4bae-8932-3651b52a2b37","Type":"ContainerStarted","Data":"b3757477aa5de2ab2376d087f0ac283984905b8fbaec666d7e79b2f0287bd3c5"} Dec 08 09:34:48 crc kubenswrapper[4662]: I1208 09:34:48.488357 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.488336918 podStartE2EDuration="2.488336918s" podCreationTimestamp="2025-12-08 09:34:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:34:48.479936463 +0000 UTC m=+1212.048964473" watchObservedRunningTime="2025-12-08 09:34:48.488336918 +0000 UTC m=+1212.057364908" Dec 08 09:34:50 crc kubenswrapper[4662]: I1208 09:34:50.715319 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Dec 08 09:34:50 crc kubenswrapper[4662]: I1208 09:34:50.728879 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openstack/nova-scheduler-0" Dec 08 09:34:51 crc kubenswrapper[4662]: I1208 09:34:51.514231 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Dec 08 09:34:53 crc kubenswrapper[4662]: I1208 09:34:53.197247 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 08 09:34:53 crc kubenswrapper[4662]: I1208 09:34:53.197692 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Dec 08 09:34:54 crc kubenswrapper[4662]: I1208 09:34:54.208942 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="411bcfde-bb2d-4274-a486-42d84f76e1c2" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 08 09:34:54 crc kubenswrapper[4662]: I1208 09:34:54.208944 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="411bcfde-bb2d-4274-a486-42d84f76e1c2" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.190:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 08 09:34:56 crc kubenswrapper[4662]: I1208 09:34:56.851264 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 08 09:34:56 crc kubenswrapper[4662]: I1208 09:34:56.851589 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Dec 08 09:34:57 crc kubenswrapper[4662]: I1208 09:34:57.864021 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84735737-6d94-4bae-8932-3651b52a2b37" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 08 09:34:57 crc kubenswrapper[4662]: I1208 09:34:57.864022 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="84735737-6d94-4bae-8932-3651b52a2b37" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.191:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Dec 08 09:35:01 crc kubenswrapper[4662]: I1208 09:35:01.892546 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Dec 08 09:35:03 crc kubenswrapper[4662]: I1208 09:35:03.204756 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 08 09:35:03 crc kubenswrapper[4662]: I1208 09:35:03.207014 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Dec 08 09:35:03 crc kubenswrapper[4662]: I1208 09:35:03.210848 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 08 09:35:03 crc kubenswrapper[4662]: I1208 09:35:03.599443 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Dec 08 09:35:06 crc kubenswrapper[4662]: I1208 09:35:06.861099 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 08 09:35:06 crc kubenswrapper[4662]: I1208 09:35:06.862899 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Dec 08 
09:35:06 crc kubenswrapper[4662]: I1208 09:35:06.863032 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 08 09:35:06 crc kubenswrapper[4662]: I1208 09:35:06.869434 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 08 09:35:07 crc kubenswrapper[4662]: I1208 09:35:07.633786 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Dec 08 09:35:07 crc kubenswrapper[4662]: I1208 09:35:07.642385 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Dec 08 09:35:15 crc kubenswrapper[4662]: I1208 09:35:15.700058 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 08 09:35:16 crc kubenswrapper[4662]: I1208 09:35:16.524082 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 08 09:35:20 crc kubenswrapper[4662]: I1208 09:35:20.085865 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" containerName="rabbitmq" containerID="cri-o://49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60" gracePeriod=604796 Dec 08 09:35:20 crc kubenswrapper[4662]: I1208 09:35:20.886301 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="a9f9be7d-4423-489a-a794-e022a83c9e51" containerName="rabbitmq" containerID="cri-o://3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654" gracePeriod=604796 Dec 08 09:35:22 crc kubenswrapper[4662]: I1208 09:35:22.056894 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Dec 08 09:35:22 crc kubenswrapper[4662]: I1208 09:35:22.710341 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="a9f9be7d-4423-489a-a794-e022a83c9e51" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.100:5671: connect: connection refused" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.619504 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.714553 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9b3e5a2-0303-435d-9bd7-763b2f802e46-pod-info\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.714639 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xzx8\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-kube-api-access-6xzx8\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.714684 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-plugins-conf\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.714914 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-tls\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.714993 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-config-data\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.715046 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-confd\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.715077 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-server-conf\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.715103 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9b3e5a2-0303-435d-9bd7-763b2f802e46-erlang-cookie-secret\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.715155 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-plugins\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.715237 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: 
\"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.715269 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-erlang-cookie\") pod \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\" (UID: \"a9b3e5a2-0303-435d-9bd7-763b2f802e46\") " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.715491 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.715768 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.716682 4662 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.716708 4662 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.717473 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.721076 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-kube-api-access-6xzx8" (OuterVolumeSpecName: "kube-api-access-6xzx8") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "kube-api-access-6xzx8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.737395 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.737517 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9b3e5a2-0303-435d-9bd7-763b2f802e46-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.749101 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a9b3e5a2-0303-435d-9bd7-763b2f802e46-pod-info" (OuterVolumeSpecName: "pod-info") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.751337 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-config-data" (OuterVolumeSpecName: "config-data") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.767637 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "persistence") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.775999 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-server-conf" (OuterVolumeSpecName: "server-conf") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.805250 4662 generic.go:334] "Generic (PLEG): container finished" podID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" containerID="49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60" exitCode=0 Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.805358 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.811184 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9b3e5a2-0303-435d-9bd7-763b2f802e46","Type":"ContainerDied","Data":"49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60"} Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.811226 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a9b3e5a2-0303-435d-9bd7-763b2f802e46","Type":"ContainerDied","Data":"d660f1b002f72f91e7de2d42cfe5ba64bc29b76b0b495b011d2812a8fc84a135"} Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.811245 4662 scope.go:117] "RemoveContainer" containerID="49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.820319 4662 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9b3e5a2-0303-435d-9bd7-763b2f802e46-pod-info\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.820348 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xzx8\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-kube-api-access-6xzx8\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.820357 4662 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.820365 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.820374 4662 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9b3e5a2-0303-435d-9bd7-763b2f802e46-server-conf\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.820382 4662 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9b3e5a2-0303-435d-9bd7-763b2f802e46-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.820401 4662 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.820426 4662 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.844550 4662 scope.go:117] "RemoveContainer" containerID="3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.845188 4662 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.863141 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a9b3e5a2-0303-435d-9bd7-763b2f802e46" (UID: "a9b3e5a2-0303-435d-9bd7-763b2f802e46"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.895530 4662 scope.go:117] "RemoveContainer" containerID="49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60" Dec 08 09:35:26 crc kubenswrapper[4662]: E1208 09:35:26.899196 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60\": container with ID starting with 49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60 not found: ID does not exist" containerID="49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.899237 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60"} err="failed to get container status \"49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60\": rpc error: code = NotFound desc = could not find container \"49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60\": container with ID starting with 49a6fe8fdff1f45a0d97977005220a2f650abb0f0dad367f5a753eba3c6ddb60 not found: ID does not exist" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.899260 4662 scope.go:117] "RemoveContainer" containerID="3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c" Dec 08 09:35:26 crc kubenswrapper[4662]: E1208 09:35:26.899608 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c\": container with ID starting with 3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c not found: ID does not exist" containerID="3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.899625 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c"} err="failed to get container status \"3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c\": rpc error: code = NotFound desc = could not find container \"3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c\": container with ID starting with 3db977c7e645fcbb05c718027cfc4b0e59bde908b54ba7f90c23dc8b495c398c not found: ID does not exist" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.921927 4662 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9b3e5a2-0303-435d-9bd7-763b2f802e46-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:26 crc kubenswrapper[4662]: I1208 09:35:26.921957 4662 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.189185 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.210421 4662 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/rabbitmq-server-0"] Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.224803 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Dec 08 09:35:27 crc kubenswrapper[4662]: E1208 09:35:27.225472 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" containerName="rabbitmq" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.225488 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" containerName="rabbitmq" Dec 08 09:35:27 crc kubenswrapper[4662]: E1208 09:35:27.225500 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" containerName="setup-container" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.225506 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" containerName="setup-container" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.225667 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" containerName="rabbitmq" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.228342 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.230731 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.230775 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.230893 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.231039 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.231083 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.231913 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.232033 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-mrd6c" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.243288 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329033 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fzd9\" (UniqueName: \"kubernetes.io/projected/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-kube-api-access-2fzd9\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329078 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-config-data\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329128 4662 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329156 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329182 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329206 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329223 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329244 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329257 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329277 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.329299 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.442214 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.442302 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fzd9\" (UniqueName: \"kubernetes.io/projected/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-kube-api-access-2fzd9\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.442327 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-config-data\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.442379 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.442408 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.442427 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.442453 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.442472 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.442495 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.442510 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 
09:35:27.442533 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.443335 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.443465 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-server-conf\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.443853 4662 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.444081 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-config-data\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.450494 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.450811 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.450859 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.458436 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-pod-info\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.458447 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " 
pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.460594 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.486614 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fzd9\" (UniqueName: \"kubernetes.io/projected/a7866efc-4d7d-4d74-907b-e01dbdeaefaa-kube-api-access-2fzd9\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.501539 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"rabbitmq-server-0\" (UID: \"a7866efc-4d7d-4d74-907b-e01dbdeaefaa\") " pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.543795 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.565996 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.649340 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-plugins-conf\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.649710 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-confd\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.649785 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9f9be7d-4423-489a-a794-e022a83c9e51-pod-info\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.649816 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7vsj\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-kube-api-access-c7vsj\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.649866 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-server-conf\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.649914 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-tls\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: 
\"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.649946 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9f9be7d-4423-489a-a794-e022a83c9e51-erlang-cookie-secret\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.649982 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-config-data\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.650017 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-plugins\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.650037 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.650096 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-erlang-cookie\") pod \"a9f9be7d-4423-489a-a794-e022a83c9e51\" (UID: \"a9f9be7d-4423-489a-a794-e022a83c9e51\") " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.650516 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.650811 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.651277 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.659402 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-kube-api-access-c7vsj" (OuterVolumeSpecName: "kube-api-access-c7vsj") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). 
InnerVolumeSpecName "kube-api-access-c7vsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.665823 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/a9f9be7d-4423-489a-a794-e022a83c9e51-pod-info" (OuterVolumeSpecName: "pod-info") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.671313 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9f9be7d-4423-489a-a794-e022a83c9e51-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.673355 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.673595 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "persistence") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.708492 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-config-data" (OuterVolumeSpecName: "config-data") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.751153 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-server-conf" (OuterVolumeSpecName: "server-conf") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.752423 4662 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.752450 4662 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-plugins-conf\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.752461 4662 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/a9f9be7d-4423-489a-a794-e022a83c9e51-pod-info\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.752469 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c7vsj\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-kube-api-access-c7vsj\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.752478 4662 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-server-conf\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.752486 4662 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.752495 4662 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/a9f9be7d-4423-489a-a794-e022a83c9e51-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.752505 4662 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a9f9be7d-4423-489a-a794-e022a83c9e51-config-data\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.752513 4662 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.752541 4662 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.780324 4662 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.823565 4662 generic.go:334] "Generic (PLEG): container finished" podID="a9f9be7d-4423-489a-a794-e022a83c9e51" containerID="3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654" exitCode=0 Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.823604 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9f9be7d-4423-489a-a794-e022a83c9e51","Type":"ContainerDied","Data":"3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654"} Dec 08 09:35:27 crc 
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.823630 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"a9f9be7d-4423-489a-a794-e022a83c9e51","Type":"ContainerDied","Data":"55959e8f26bf65f85d02094560b3ce9cb44bf02946cea6ca77b5d2ecd81b94e7"}
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.823647 4662 scope.go:117] "RemoveContainer" containerID="3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654"
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.823646 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.849882 4662 scope.go:117] "RemoveContainer" containerID="b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881"
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.854457 4662 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\""
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.858150 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "a9f9be7d-4423-489a-a794-e022a83c9e51" (UID: "a9f9be7d-4423-489a-a794-e022a83c9e51"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.871813 4662 scope.go:117] "RemoveContainer" containerID="3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654"
Dec 08 09:35:27 crc kubenswrapper[4662]: E1208 09:35:27.872332 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654\": container with ID starting with 3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654 not found: ID does not exist" containerID="3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654"
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.872365 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654"} err="failed to get container status \"3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654\": rpc error: code = NotFound desc = could not find container \"3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654\": container with ID starting with 3c078a23a689d3dc14cf11648c2cdf25089e0b6c282671f9d87734f62fa8d654 not found: ID does not exist"
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.872392 4662 scope.go:117] "RemoveContainer" containerID="b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881"
Dec 08 09:35:27 crc kubenswrapper[4662]: E1208 09:35:27.872644 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881\": container with ID starting with b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881 not found: ID does not exist" containerID="b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881"
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.872706 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881"} err="failed to get container status \"b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881\": rpc error: code = NotFound desc = could not find container \"b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881\": container with ID starting with b8ac5381c79db83ab9ab8b43496f37076dea137993dfb7a5fa5eb4c5f5c01881 not found: ID does not exist"
Dec 08 09:35:27 crc kubenswrapper[4662]: I1208 09:35:27.955793 4662 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/a9f9be7d-4423-489a-a794-e022a83c9e51-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.119564 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.172733 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.191809 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.198186 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 08 09:35:28 crc kubenswrapper[4662]: E1208 09:35:28.198645 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f9be7d-4423-489a-a794-e022a83c9e51" containerName="setup-container"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.198661 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f9be7d-4423-489a-a794-e022a83c9e51" containerName="setup-container"
Dec 08 09:35:28 crc kubenswrapper[4662]: E1208 09:35:28.198676 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f9be7d-4423-489a-a794-e022a83c9e51" containerName="rabbitmq"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.198684 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f9be7d-4423-489a-a794-e022a83c9e51" containerName="rabbitmq"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.198887 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f9be7d-4423-489a-a794-e022a83c9e51" containerName="rabbitmq"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.199817 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.203349 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.203855 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.205275 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.205404 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.210215 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.210430 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.210587 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-vqthz"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.222782 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.261729 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.261810 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e86949df-33b5-4ea8-86fc-d8a9ed982826-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.261873 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qpz9\" (UniqueName: \"kubernetes.io/projected/e86949df-33b5-4ea8-86fc-d8a9ed982826-kube-api-access-4qpz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.261961 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.262151 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.262183 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e86949df-33b5-4ea8-86fc-d8a9ed982826-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.262227 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.262259 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e86949df-33b5-4ea8-86fc-d8a9ed982826-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.262304 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e86949df-33b5-4ea8-86fc-d8a9ed982826-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.262327 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e86949df-33b5-4ea8-86fc-d8a9ed982826-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.262350 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.363374 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.363428 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e86949df-33b5-4ea8-86fc-d8a9ed982826-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.363464 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e86949df-33b5-4ea8-86fc-d8a9ed982826-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0"
volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e86949df-33b5-4ea8-86fc-d8a9ed982826-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.363520 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.363559 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.363580 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e86949df-33b5-4ea8-86fc-d8a9ed982826-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.363620 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qpz9\" (UniqueName: \"kubernetes.io/projected/e86949df-33b5-4ea8-86fc-d8a9ed982826-kube-api-access-4qpz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.363705 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.363765 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.363788 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e86949df-33b5-4ea8-86fc-d8a9ed982826-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.364778 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e86949df-33b5-4ea8-86fc-d8a9ed982826-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.365042 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.367299 4662 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.367872 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/e86949df-33b5-4ea8-86fc-d8a9ed982826-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.368537 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/e86949df-33b5-4ea8-86fc-d8a9ed982826-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.368806 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.369538 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.370580 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/e86949df-33b5-4ea8-86fc-d8a9ed982826-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.370725 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/e86949df-33b5-4ea8-86fc-d8a9ed982826-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.371596 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/e86949df-33b5-4ea8-86fc-d8a9ed982826-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.385297 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qpz9\" (UniqueName: \"kubernetes.io/projected/e86949df-33b5-4ea8-86fc-d8a9ed982826-kube-api-access-4qpz9\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.399429 4662 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"e86949df-33b5-4ea8-86fc-d8a9ed982826\") " pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.553684 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.721634 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9b3e5a2-0303-435d-9bd7-763b2f802e46" path="/var/lib/kubelet/pods/a9b3e5a2-0303-435d-9bd7-763b2f802e46/volumes" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.722771 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f9be7d-4423-489a-a794-e022a83c9e51" path="/var/lib/kubelet/pods/a9f9be7d-4423-489a-a794-e022a83c9e51/volumes" Dec 08 09:35:28 crc kubenswrapper[4662]: I1208 09:35:28.863344 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7866efc-4d7d-4d74-907b-e01dbdeaefaa","Type":"ContainerStarted","Data":"440af3c59fb4f6594e9b2d57a69afa5f0588702daff48059f86e1fca3985c468"} Dec 08 09:35:29 crc kubenswrapper[4662]: I1208 09:35:29.044039 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Dec 08 09:35:29 crc kubenswrapper[4662]: W1208 09:35:29.078226 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode86949df_33b5_4ea8_86fc_d8a9ed982826.slice/crio-7155231ee3c6fcf8aad4e61d21453b7c1436676fc62d1ee4a09b46ed63b84006 WatchSource:0}: Error finding container 7155231ee3c6fcf8aad4e61d21453b7c1436676fc62d1ee4a09b46ed63b84006: Status 404 returned error can't find the container with id 7155231ee3c6fcf8aad4e61d21453b7c1436676fc62d1ee4a09b46ed63b84006 Dec 08 09:35:29 crc kubenswrapper[4662]: I1208 09:35:29.874825 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e86949df-33b5-4ea8-86fc-d8a9ed982826","Type":"ContainerStarted","Data":"7155231ee3c6fcf8aad4e61d21453b7c1436676fc62d1ee4a09b46ed63b84006"} Dec 08 09:35:29 crc kubenswrapper[4662]: I1208 09:35:29.876869 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7866efc-4d7d-4d74-907b-e01dbdeaefaa","Type":"ContainerStarted","Data":"c890cd870293c03937c60a744f2ccac1fbafe3fb17f8eb322a1a41fe711b376d"} Dec 08 09:35:30 crc kubenswrapper[4662]: I1208 09:35:30.885500 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e86949df-33b5-4ea8-86fc-d8a9ed982826","Type":"ContainerStarted","Data":"f1f3be6116b9a8d6377bf3457a5fcfd19c808f29babe44ad705ee5d9f12c4af6"} Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.390270 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zrg28"] Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.391776 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.393993 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.409390 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zrg28"] Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.431668 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smdlr\" (UniqueName: \"kubernetes.io/projected/3268d878-714d-4446-ad3d-5ee1a80db91d-kube-api-access-smdlr\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.431761 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.431820 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.431852 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.431904 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-dns-svc\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.431994 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-config\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.533862 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.533931 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: 
\"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.533954 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.533980 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-dns-svc\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.534015 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-config\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.534152 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smdlr\" (UniqueName: \"kubernetes.io/projected/3268d878-714d-4446-ad3d-5ee1a80db91d-kube-api-access-smdlr\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.535012 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-openstack-edpm-ipam\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.535167 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-nb\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.535216 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-dns-svc\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.535219 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-config\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.535764 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-sb\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: 
I1208 09:35:31.552990 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smdlr\" (UniqueName: \"kubernetes.io/projected/3268d878-714d-4446-ad3d-5ee1a80db91d-kube-api-access-smdlr\") pod \"dnsmasq-dns-578b8d767c-zrg28\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:31 crc kubenswrapper[4662]: I1208 09:35:31.713937 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:32 crc kubenswrapper[4662]: I1208 09:35:32.215143 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zrg28"] Dec 08 09:35:32 crc kubenswrapper[4662]: I1208 09:35:32.611299 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:35:32 crc kubenswrapper[4662]: I1208 09:35:32.611665 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:35:32 crc kubenswrapper[4662]: I1208 09:35:32.907481 4662 generic.go:334] "Generic (PLEG): container finished" podID="3268d878-714d-4446-ad3d-5ee1a80db91d" containerID="e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9" exitCode=0 Dec 08 09:35:32 crc kubenswrapper[4662]: I1208 09:35:32.907527 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" event={"ID":"3268d878-714d-4446-ad3d-5ee1a80db91d","Type":"ContainerDied","Data":"e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9"} Dec 08 09:35:32 crc kubenswrapper[4662]: I1208 09:35:32.907551 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" event={"ID":"3268d878-714d-4446-ad3d-5ee1a80db91d","Type":"ContainerStarted","Data":"74918bf098fce3e231ef567ad2f34647447264325d7d8137ba3101e91c6054b6"} Dec 08 09:35:33 crc kubenswrapper[4662]: I1208 09:35:33.917414 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" event={"ID":"3268d878-714d-4446-ad3d-5ee1a80db91d","Type":"ContainerStarted","Data":"df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a"} Dec 08 09:35:33 crc kubenswrapper[4662]: I1208 09:35:33.917568 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:33 crc kubenswrapper[4662]: I1208 09:35:33.939873 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" podStartSLOduration=2.93985749 podStartE2EDuration="2.93985749s" podCreationTimestamp="2025-12-08 09:35:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:35:33.93835489 +0000 UTC m=+1257.507382880" watchObservedRunningTime="2025-12-08 09:35:33.93985749 +0000 UTC m=+1257.508885480" Dec 08 09:35:41 crc kubenswrapper[4662]: I1208 09:35:41.716977 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:41 crc kubenswrapper[4662]: I1208 09:35:41.788276 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-hzhtj"] Dec 08 09:35:41 crc kubenswrapper[4662]: I1208 09:35:41.788545 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" podUID="d732ac3b-c886-4109-b09e-780d6fd5b6f7" containerName="dnsmasq-dns" containerID="cri-o://3c4eb2d32bcdfbdda0723477705c114936207d29f03ac567476f431c382202f3" gracePeriod=10 Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.022311 4662 generic.go:334] "Generic (PLEG): container finished" podID="d732ac3b-c886-4109-b09e-780d6fd5b6f7" containerID="3c4eb2d32bcdfbdda0723477705c114936207d29f03ac567476f431c382202f3" exitCode=0 Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.022555 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" event={"ID":"d732ac3b-c886-4109-b09e-780d6fd5b6f7","Type":"ContainerDied","Data":"3c4eb2d32bcdfbdda0723477705c114936207d29f03ac567476f431c382202f3"} Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.049303 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8585fc4db5-l6l9w"] Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.051504 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.094372 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8585fc4db5-l6l9w"] Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.173716 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-ovsdbserver-sb\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.173791 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-config\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.173881 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-ovsdbserver-nb\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.173917 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-dns-svc\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.174002 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-openstack-edpm-ipam\") pod 
\"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.174046 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lr65\" (UniqueName: \"kubernetes.io/projected/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-kube-api-access-7lr65\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.275399 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-ovsdbserver-sb\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.275450 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-config\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.275522 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-ovsdbserver-nb\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.275547 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-dns-svc\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.275609 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-openstack-edpm-ipam\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.275640 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lr65\" (UniqueName: \"kubernetes.io/projected/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-kube-api-access-7lr65\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.276514 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-openstack-edpm-ipam\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.276849 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-ovsdbserver-sb\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" 
(UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.276991 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-ovsdbserver-nb\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.277211 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-dns-svc\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.277246 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-config\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.300768 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lr65\" (UniqueName: \"kubernetes.io/projected/8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a-kube-api-access-7lr65\") pod \"dnsmasq-dns-8585fc4db5-l6l9w\" (UID: \"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a\") " pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.415681 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.424362 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.581221 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-nb\") pod \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.581285 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-config\") pod \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.581302 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-dns-svc\") pod \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.581331 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-sb\") pod \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.581521 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9pzd7\" (UniqueName: \"kubernetes.io/projected/d732ac3b-c886-4109-b09e-780d6fd5b6f7-kube-api-access-9pzd7\") pod \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\" (UID: \"d732ac3b-c886-4109-b09e-780d6fd5b6f7\") " Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.596775 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d732ac3b-c886-4109-b09e-780d6fd5b6f7-kube-api-access-9pzd7" (OuterVolumeSpecName: "kube-api-access-9pzd7") pod "d732ac3b-c886-4109-b09e-780d6fd5b6f7" (UID: "d732ac3b-c886-4109-b09e-780d6fd5b6f7"). InnerVolumeSpecName "kube-api-access-9pzd7". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.652411 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-config" (OuterVolumeSpecName: "config") pod "d732ac3b-c886-4109-b09e-780d6fd5b6f7" (UID: "d732ac3b-c886-4109-b09e-780d6fd5b6f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.656572 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d732ac3b-c886-4109-b09e-780d6fd5b6f7" (UID: "d732ac3b-c886-4109-b09e-780d6fd5b6f7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.678261 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d732ac3b-c886-4109-b09e-780d6fd5b6f7" (UID: "d732ac3b-c886-4109-b09e-780d6fd5b6f7"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.683342 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.683371 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.683381 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9pzd7\" (UniqueName: \"kubernetes.io/projected/d732ac3b-c886-4109-b09e-780d6fd5b6f7-kube-api-access-9pzd7\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.683390 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.709776 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d732ac3b-c886-4109-b09e-780d6fd5b6f7" (UID: "d732ac3b-c886-4109-b09e-780d6fd5b6f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:42 crc kubenswrapper[4662]: I1208 09:35:42.785223 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d732ac3b-c886-4109-b09e-780d6fd5b6f7-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:43 crc kubenswrapper[4662]: I1208 09:35:43.015406 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8585fc4db5-l6l9w"] Dec 08 09:35:43 crc kubenswrapper[4662]: W1208 09:35:43.025927 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a3672e3_a6bc_4a6f_8eb7_accfd9c46c5a.slice/crio-38d3589572853ed06e169b2e45fbfb3e6f5da613f2229e8941b5ebc689d5a7a7 WatchSource:0}: Error finding container 38d3589572853ed06e169b2e45fbfb3e6f5da613f2229e8941b5ebc689d5a7a7: Status 404 returned error can't find the container with id 38d3589572853ed06e169b2e45fbfb3e6f5da613f2229e8941b5ebc689d5a7a7 Dec 08 09:35:43 crc kubenswrapper[4662]: I1208 09:35:43.038023 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" Dec 08 09:35:43 crc kubenswrapper[4662]: I1208 09:35:43.039341 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68d4b6d797-hzhtj" event={"ID":"d732ac3b-c886-4109-b09e-780d6fd5b6f7","Type":"ContainerDied","Data":"86fc8170084981b3269deaf82e0de6f4d28e4d17b1c3d460ac39883a62f737c3"} Dec 08 09:35:43 crc kubenswrapper[4662]: I1208 09:35:43.039446 4662 scope.go:117] "RemoveContainer" containerID="3c4eb2d32bcdfbdda0723477705c114936207d29f03ac567476f431c382202f3" Dec 08 09:35:43 crc kubenswrapper[4662]: I1208 09:35:43.042127 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" event={"ID":"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a","Type":"ContainerStarted","Data":"38d3589572853ed06e169b2e45fbfb3e6f5da613f2229e8941b5ebc689d5a7a7"} Dec 08 09:35:43 crc kubenswrapper[4662]: I1208 09:35:43.192387 4662 scope.go:117] "RemoveContainer" containerID="e21225caa0c4b50feb2638681a3859e3d80cc066758d00d42d1705b0db97a704" Dec 08 09:35:43 crc kubenswrapper[4662]: I1208 09:35:43.219624 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-hzhtj"] Dec 08 09:35:43 crc kubenswrapper[4662]: I1208 09:35:43.227768 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68d4b6d797-hzhtj"] Dec 08 09:35:44 crc kubenswrapper[4662]: I1208 09:35:44.061678 4662 generic.go:334] "Generic (PLEG): container finished" podID="8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a" containerID="474dd77d64191b8efea606538b0353da97787a8a9b73220f6b4ef0fd9bf553c0" exitCode=0 Dec 08 09:35:44 crc kubenswrapper[4662]: I1208 09:35:44.061818 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" event={"ID":"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a","Type":"ContainerDied","Data":"474dd77d64191b8efea606538b0353da97787a8a9b73220f6b4ef0fd9bf553c0"} Dec 08 09:35:44 crc kubenswrapper[4662]: I1208 09:35:44.709565 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d732ac3b-c886-4109-b09e-780d6fd5b6f7" path="/var/lib/kubelet/pods/d732ac3b-c886-4109-b09e-780d6fd5b6f7/volumes" Dec 08 09:35:45 crc kubenswrapper[4662]: I1208 09:35:45.072417 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" event={"ID":"8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a","Type":"ContainerStarted","Data":"2d34dc91b96070dbccd4094ae45382998242ad43fb8e6c9caacc874d6642af6e"} Dec 08 09:35:45 crc kubenswrapper[4662]: I1208 09:35:45.073574 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:45 crc kubenswrapper[4662]: I1208 09:35:45.101407 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" podStartSLOduration=3.101384744 podStartE2EDuration="3.101384744s" podCreationTimestamp="2025-12-08 09:35:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:35:45.095442985 +0000 UTC m=+1268.664470975" watchObservedRunningTime="2025-12-08 09:35:45.101384744 +0000 UTC m=+1268.670412734" Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.416949 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8585fc4db5-l6l9w" Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.474532 4662 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zrg28"] Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.475018 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" podUID="3268d878-714d-4446-ad3d-5ee1a80db91d" containerName="dnsmasq-dns" containerID="cri-o://df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a" gracePeriod=10 Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.962850 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.980316 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smdlr\" (UniqueName: \"kubernetes.io/projected/3268d878-714d-4446-ad3d-5ee1a80db91d-kube-api-access-smdlr\") pod \"3268d878-714d-4446-ad3d-5ee1a80db91d\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.980413 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-openstack-edpm-ipam\") pod \"3268d878-714d-4446-ad3d-5ee1a80db91d\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.980557 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-config\") pod \"3268d878-714d-4446-ad3d-5ee1a80db91d\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.980606 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-nb\") pod \"3268d878-714d-4446-ad3d-5ee1a80db91d\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.980655 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-sb\") pod \"3268d878-714d-4446-ad3d-5ee1a80db91d\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.980706 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-dns-svc\") pod \"3268d878-714d-4446-ad3d-5ee1a80db91d\" (UID: \"3268d878-714d-4446-ad3d-5ee1a80db91d\") " Dec 08 09:35:52 crc kubenswrapper[4662]: I1208 09:35:52.992101 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3268d878-714d-4446-ad3d-5ee1a80db91d-kube-api-access-smdlr" (OuterVolumeSpecName: "kube-api-access-smdlr") pod "3268d878-714d-4446-ad3d-5ee1a80db91d" (UID: "3268d878-714d-4446-ad3d-5ee1a80db91d"). InnerVolumeSpecName "kube-api-access-smdlr". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.068230 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-config" (OuterVolumeSpecName: "config") pod "3268d878-714d-4446-ad3d-5ee1a80db91d" (UID: "3268d878-714d-4446-ad3d-5ee1a80db91d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.082426 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "3268d878-714d-4446-ad3d-5ee1a80db91d" (UID: "3268d878-714d-4446-ad3d-5ee1a80db91d"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.082812 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smdlr\" (UniqueName: \"kubernetes.io/projected/3268d878-714d-4446-ad3d-5ee1a80db91d-kube-api-access-smdlr\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.082843 4662 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.082853 4662 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-config\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.091373 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3268d878-714d-4446-ad3d-5ee1a80db91d" (UID: "3268d878-714d-4446-ad3d-5ee1a80db91d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.102270 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3268d878-714d-4446-ad3d-5ee1a80db91d" (UID: "3268d878-714d-4446-ad3d-5ee1a80db91d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.124292 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3268d878-714d-4446-ad3d-5ee1a80db91d" (UID: "3268d878-714d-4446-ad3d-5ee1a80db91d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.154422 4662 generic.go:334] "Generic (PLEG): container finished" podID="3268d878-714d-4446-ad3d-5ee1a80db91d" containerID="df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a" exitCode=0 Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.154463 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" event={"ID":"3268d878-714d-4446-ad3d-5ee1a80db91d","Type":"ContainerDied","Data":"df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a"} Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.154499 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" event={"ID":"3268d878-714d-4446-ad3d-5ee1a80db91d","Type":"ContainerDied","Data":"74918bf098fce3e231ef567ad2f34647447264325d7d8137ba3101e91c6054b6"} Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.154519 4662 scope.go:117] "RemoveContainer" containerID="df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.154662 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-578b8d767c-zrg28" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.174626 4662 scope.go:117] "RemoveContainer" containerID="e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.183602 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.183819 4662 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.183879 4662 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3268d878-714d-4446-ad3d-5ee1a80db91d-dns-svc\") on node \"crc\" DevicePath \"\"" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.190967 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zrg28"] Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.198449 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-578b8d767c-zrg28"] Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.203981 4662 scope.go:117] "RemoveContainer" containerID="df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a" Dec 08 09:35:53 crc kubenswrapper[4662]: E1208 09:35:53.204507 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a\": container with ID starting with df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a not found: ID does not exist" containerID="df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.204543 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a"} err="failed to get container status 
\"df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a\": rpc error: code = NotFound desc = could not find container \"df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a\": container with ID starting with df7ca8ed5ce90030c82de3cdf9e97b28ad8f6b00817d6f94933bc54c900ecd3a not found: ID does not exist" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.204592 4662 scope.go:117] "RemoveContainer" containerID="e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9" Dec 08 09:35:53 crc kubenswrapper[4662]: E1208 09:35:53.204870 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9\": container with ID starting with e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9 not found: ID does not exist" containerID="e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9" Dec 08 09:35:53 crc kubenswrapper[4662]: I1208 09:35:53.204950 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9"} err="failed to get container status \"e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9\": rpc error: code = NotFound desc = could not find container \"e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9\": container with ID starting with e6f6ecb1b207c9130dfb9bba3fbd710d27d408191a46fd16e219551d613bf4e9 not found: ID does not exist" Dec 08 09:35:54 crc kubenswrapper[4662]: I1208 09:35:54.708934 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3268d878-714d-4446-ad3d-5ee1a80db91d" path="/var/lib/kubelet/pods/3268d878-714d-4446-ad3d-5ee1a80db91d/volumes" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.172566 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8"] Dec 08 09:35:58 crc kubenswrapper[4662]: E1208 09:35:58.173226 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3268d878-714d-4446-ad3d-5ee1a80db91d" containerName="init" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.173238 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="3268d878-714d-4446-ad3d-5ee1a80db91d" containerName="init" Dec 08 09:35:58 crc kubenswrapper[4662]: E1208 09:35:58.173261 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d732ac3b-c886-4109-b09e-780d6fd5b6f7" containerName="dnsmasq-dns" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.173267 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="d732ac3b-c886-4109-b09e-780d6fd5b6f7" containerName="dnsmasq-dns" Dec 08 09:35:58 crc kubenswrapper[4662]: E1208 09:35:58.173281 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3268d878-714d-4446-ad3d-5ee1a80db91d" containerName="dnsmasq-dns" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.173287 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="3268d878-714d-4446-ad3d-5ee1a80db91d" containerName="dnsmasq-dns" Dec 08 09:35:58 crc kubenswrapper[4662]: E1208 09:35:58.173302 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d732ac3b-c886-4109-b09e-780d6fd5b6f7" containerName="init" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.173308 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="d732ac3b-c886-4109-b09e-780d6fd5b6f7" containerName="init" Dec 08 09:35:58 crc 
kubenswrapper[4662]: I1208 09:35:58.173485 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="3268d878-714d-4446-ad3d-5ee1a80db91d" containerName="dnsmasq-dns" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.173505 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="d732ac3b-c886-4109-b09e-780d6fd5b6f7" containerName="dnsmasq-dns" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.174131 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.176008 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-59jf7" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.176571 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.176864 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.179498 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.187331 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8"] Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.310185 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.310317 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rpj4\" (UniqueName: \"kubernetes.io/projected/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-kube-api-access-5rpj4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.310390 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.310416 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.412899 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.412956 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.412990 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.413092 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rpj4\" (UniqueName: \"kubernetes.io/projected/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-kube-api-access-5rpj4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.418521 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.419134 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.419903 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.430517 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rpj4\" (UniqueName: \"kubernetes.io/projected/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-kube-api-access-5rpj4\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:58 crc kubenswrapper[4662]: I1208 09:35:58.497710 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:35:59 crc kubenswrapper[4662]: I1208 09:35:59.059749 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8"] Dec 08 09:35:59 crc kubenswrapper[4662]: I1208 09:35:59.219016 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" event={"ID":"13e5cc43-bfc2-4f92-a602-48e510e7f9fe","Type":"ContainerStarted","Data":"2bdfb25c04f57906f7f97dc37936af2e9ce93a09a8bcf6f89286f00423c3a66b"} Dec 08 09:36:02 crc kubenswrapper[4662]: I1208 09:36:02.247304 4662 generic.go:334] "Generic (PLEG): container finished" podID="a7866efc-4d7d-4d74-907b-e01dbdeaefaa" containerID="c890cd870293c03937c60a744f2ccac1fbafe3fb17f8eb322a1a41fe711b376d" exitCode=0 Dec 08 09:36:02 crc kubenswrapper[4662]: I1208 09:36:02.247401 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7866efc-4d7d-4d74-907b-e01dbdeaefaa","Type":"ContainerDied","Data":"c890cd870293c03937c60a744f2ccac1fbafe3fb17f8eb322a1a41fe711b376d"} Dec 08 09:36:02 crc kubenswrapper[4662]: I1208 09:36:02.611904 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:36:02 crc kubenswrapper[4662]: I1208 09:36:02.612581 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:36:03 crc kubenswrapper[4662]: I1208 09:36:03.265119 4662 generic.go:334] "Generic (PLEG): container finished" podID="e86949df-33b5-4ea8-86fc-d8a9ed982826" containerID="f1f3be6116b9a8d6377bf3457a5fcfd19c808f29babe44ad705ee5d9f12c4af6" exitCode=0 Dec 08 09:36:03 crc kubenswrapper[4662]: I1208 09:36:03.265218 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e86949df-33b5-4ea8-86fc-d8a9ed982826","Type":"ContainerDied","Data":"f1f3be6116b9a8d6377bf3457a5fcfd19c808f29babe44ad705ee5d9f12c4af6"} Dec 08 09:36:03 crc kubenswrapper[4662]: I1208 09:36:03.272912 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"a7866efc-4d7d-4d74-907b-e01dbdeaefaa","Type":"ContainerStarted","Data":"6b1fcc624b4788f79a2ebc178f3bb2bcd0f23cd5072f3449b4e78047502fc3e4"} Dec 08 09:36:03 crc kubenswrapper[4662]: I1208 09:36:03.273111 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Dec 08 09:36:03 crc kubenswrapper[4662]: I1208 09:36:03.318418 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=36.318394538 podStartE2EDuration="36.318394538s" podCreationTimestamp="2025-12-08 09:35:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:36:03.313449446 +0000 UTC m=+1286.882477426" watchObservedRunningTime="2025-12-08 09:36:03.318394538 +0000 UTC m=+1286.887422528" Dec 08 09:36:10 crc 
kubenswrapper[4662]: I1208 09:36:10.353341 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" event={"ID":"13e5cc43-bfc2-4f92-a602-48e510e7f9fe","Type":"ContainerStarted","Data":"36a81a0a560e6f2037a9a3e68670ba4276bf1194dee03f41a9f3ad0a994aefc3"} Dec 08 09:36:10 crc kubenswrapper[4662]: I1208 09:36:10.356535 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"e86949df-33b5-4ea8-86fc-d8a9ed982826","Type":"ContainerStarted","Data":"7893b9a793f9729b1a8eccc17d9e43444a1bad1f603f29844510b9663fb5a3da"} Dec 08 09:36:10 crc kubenswrapper[4662]: I1208 09:36:10.356771 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:36:10 crc kubenswrapper[4662]: I1208 09:36:10.378521 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" podStartSLOduration=1.7101873159999998 podStartE2EDuration="12.378499782s" podCreationTimestamp="2025-12-08 09:35:58 +0000 UTC" firstStartedPulling="2025-12-08 09:35:59.066998375 +0000 UTC m=+1282.636026365" lastFinishedPulling="2025-12-08 09:36:09.735310831 +0000 UTC m=+1293.304338831" observedRunningTime="2025-12-08 09:36:10.370284141 +0000 UTC m=+1293.939312131" watchObservedRunningTime="2025-12-08 09:36:10.378499782 +0000 UTC m=+1293.947527802" Dec 08 09:36:10 crc kubenswrapper[4662]: I1208 09:36:10.401229 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.401212471 podStartE2EDuration="42.401212471s" podCreationTimestamp="2025-12-08 09:35:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 09:36:10.391818109 +0000 UTC m=+1293.960846099" watchObservedRunningTime="2025-12-08 09:36:10.401212471 +0000 UTC m=+1293.970240461" Dec 08 09:36:17 crc kubenswrapper[4662]: I1208 09:36:17.576078 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Dec 08 09:36:22 crc kubenswrapper[4662]: I1208 09:36:22.476104 4662 generic.go:334] "Generic (PLEG): container finished" podID="13e5cc43-bfc2-4f92-a602-48e510e7f9fe" containerID="36a81a0a560e6f2037a9a3e68670ba4276bf1194dee03f41a9f3ad0a994aefc3" exitCode=0 Dec 08 09:36:22 crc kubenswrapper[4662]: I1208 09:36:22.476203 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" event={"ID":"13e5cc43-bfc2-4f92-a602-48e510e7f9fe","Type":"ContainerDied","Data":"36a81a0a560e6f2037a9a3e68670ba4276bf1194dee03f41a9f3ad0a994aefc3"} Dec 08 09:36:23 crc kubenswrapper[4662]: I1208 09:36:23.876272 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8"
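The pod_startup_latency_tracker entries make the bookkeeping explicit: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window, which is why the two values coincide whenever firstStartedPulling is the zero time. Re-deriving the repo-setup numbers above as a worked check, with the timestamps exactly as logged:

```go
package main

import (
	"fmt"
	"time"
)

// Timestamps copied from the repo-setup-edpm-deployment entry above; the
// layout matches Go's default time.Time formatting used in the log.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	parse := func(s string) time.Time {
		t, err := time.Parse(layout, s)
		if err != nil {
			panic(err)
		}
		return t
	}
	created := parse("2025-12-08 09:35:58 +0000 UTC")
	running := parse("2025-12-08 09:36:10.378499782 +0000 UTC") // watchObservedRunningTime
	pullStart := parse("2025-12-08 09:35:59.066998375 +0000 UTC")
	pullEnd := parse("2025-12-08 09:36:09.735310831 +0000 UTC")

	e2e := running.Sub(created)         // 12.378499782s, the logged podStartE2EDuration
	slo := e2e - pullEnd.Sub(pullStart) // ~1.710187326s; the log shows 1.7101873159999998
	fmt.Println(e2e, slo)               // the tiny tail difference is float64 rounding in the logged value
}
```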
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.025692 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-inventory\") pod \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.025981 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rpj4\" (UniqueName: \"kubernetes.io/projected/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-kube-api-access-5rpj4\") pod \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.026060 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-repo-setup-combined-ca-bundle\") pod \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.026086 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-ssh-key\") pod \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\" (UID: \"13e5cc43-bfc2-4f92-a602-48e510e7f9fe\") " Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.038671 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "13e5cc43-bfc2-4f92-a602-48e510e7f9fe" (UID: "13e5cc43-bfc2-4f92-a602-48e510e7f9fe"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.038720 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-kube-api-access-5rpj4" (OuterVolumeSpecName: "kube-api-access-5rpj4") pod "13e5cc43-bfc2-4f92-a602-48e510e7f9fe" (UID: "13e5cc43-bfc2-4f92-a602-48e510e7f9fe"). InnerVolumeSpecName "kube-api-access-5rpj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.061726 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "13e5cc43-bfc2-4f92-a602-48e510e7f9fe" (UID: "13e5cc43-bfc2-4f92-a602-48e510e7f9fe"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.062718 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-inventory" (OuterVolumeSpecName: "inventory") pod "13e5cc43-bfc2-4f92-a602-48e510e7f9fe" (UID: "13e5cc43-bfc2-4f92-a602-48e510e7f9fe"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.128213 4662 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.128252 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.128268 4662 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-inventory\") on node \"crc\" DevicePath \"\"" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.128280 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rpj4\" (UniqueName: \"kubernetes.io/projected/13e5cc43-bfc2-4f92-a602-48e510e7f9fe-kube-api-access-5rpj4\") on node \"crc\" DevicePath \"\"" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.496228 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" event={"ID":"13e5cc43-bfc2-4f92-a602-48e510e7f9fe","Type":"ContainerDied","Data":"2bdfb25c04f57906f7f97dc37936af2e9ce93a09a8bcf6f89286f00423c3a66b"} Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.496275 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2bdfb25c04f57906f7f97dc37936af2e9ce93a09a8bcf6f89286f00423c3a66b" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.496296 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.597955 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct"] Dec 08 09:36:24 crc kubenswrapper[4662]: E1208 09:36:24.599275 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13e5cc43-bfc2-4f92-a602-48e510e7f9fe" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.599302 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="13e5cc43-bfc2-4f92-a602-48e510e7f9fe" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.615384 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="13e5cc43-bfc2-4f92-a602-48e510e7f9fe" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.615966 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct"] Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.616048 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.620190 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.620387 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.623717 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.623964 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-59jf7" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.739691 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.739765 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv4ld\" (UniqueName: \"kubernetes.io/projected/138459d2-75b3-467c-9bf3-7b458dc202ad-kube-api-access-hv4ld\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.740046 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.740178 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.841822 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.842191 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc 
kubenswrapper[4662]: I1208 09:36:24.842354 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.842428 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv4ld\" (UniqueName: \"kubernetes.io/projected/138459d2-75b3-467c-9bf3-7b458dc202ad-kube-api-access-hv4ld\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.845529 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.845837 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.864605 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv4ld\" (UniqueName: \"kubernetes.io/projected/138459d2-75b3-467c-9bf3-7b458dc202ad-kube-api-access-hv4ld\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.871392 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:24 crc kubenswrapper[4662]: I1208 09:36:24.945538 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:36:25 crc kubenswrapper[4662]: I1208 09:36:25.447600 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct"] Dec 08 09:36:25 crc kubenswrapper[4662]: I1208 09:36:25.504414 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" event={"ID":"138459d2-75b3-467c-9bf3-7b458dc202ad","Type":"ContainerStarted","Data":"16ceef7afaed831b3b6dfc069483788b1fb4483eda8f6ef1861d86fd7c8c8fb7"} Dec 08 09:36:26 crc kubenswrapper[4662]: I1208 09:36:26.515701 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" event={"ID":"138459d2-75b3-467c-9bf3-7b458dc202ad","Type":"ContainerStarted","Data":"5fdc3ffc9d554b2a3a0115df2ba111a776d78c258545f35fe1e49560eb20f931"} Dec 08 09:36:26 crc kubenswrapper[4662]: I1208 09:36:26.541834 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" podStartSLOduration=2.112938746 podStartE2EDuration="2.541815255s" podCreationTimestamp="2025-12-08 09:36:24 +0000 UTC" firstStartedPulling="2025-12-08 09:36:25.453559109 +0000 UTC m=+1309.022587099" lastFinishedPulling="2025-12-08 09:36:25.882435608 +0000 UTC m=+1309.451463608" observedRunningTime="2025-12-08 09:36:26.532536706 +0000 UTC m=+1310.101564706" watchObservedRunningTime="2025-12-08 09:36:26.541815255 +0000 UTC m=+1310.110843245" Dec 08 09:36:28 crc kubenswrapper[4662]: I1208 09:36:28.559166 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Dec 08 09:36:32 crc kubenswrapper[4662]: I1208 09:36:32.611510 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:36:32 crc kubenswrapper[4662]: I1208 09:36:32.612107 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:36:32 crc kubenswrapper[4662]: I1208 09:36:32.612176 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:36:32 crc kubenswrapper[4662]: I1208 09:36:32.612917 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e9e9e908f5c82fe5008d57bd1d536ac24016344c62104f9a96f0f2c0d9a74ee"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 09:36:32 crc kubenswrapper[4662]: I1208 09:36:32.612976 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" containerID="cri-o://2e9e9e908f5c82fe5008d57bd1d536ac24016344c62104f9a96f0f2c0d9a74ee" 
gracePeriod=600 Dec 08 09:36:33 crc kubenswrapper[4662]: I1208 09:36:33.594627 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerID="2e9e9e908f5c82fe5008d57bd1d536ac24016344c62104f9a96f0f2c0d9a74ee" exitCode=0 Dec 08 09:36:33 crc kubenswrapper[4662]: I1208 09:36:33.594703 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"2e9e9e908f5c82fe5008d57bd1d536ac24016344c62104f9a96f0f2c0d9a74ee"} Dec 08 09:36:33 crc kubenswrapper[4662]: I1208 09:36:33.595112 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e"} Dec 08 09:36:33 crc kubenswrapper[4662]: I1208 09:36:33.595134 4662 scope.go:117] "RemoveContainer" containerID="1a38cb2745e5efa64e1f916b4601a82b9010af6b77998742afd727eba45f786c" Dec 08 09:36:47 crc kubenswrapper[4662]: I1208 09:36:47.111469 4662 scope.go:117] "RemoveContainer" containerID="872fe1d0dde292a7ddd187e280826ab4cd43c9aba6797c243281ddb134822f95" Dec 08 09:37:47 crc kubenswrapper[4662]: I1208 09:37:47.176316 4662 scope.go:117] "RemoveContainer" containerID="02c7a2b96844cfeda772d85971739253ca38640415beb5e67820bde3741666b3" Dec 08 09:38:32 crc kubenswrapper[4662]: I1208 09:38:32.611578 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:38:32 crc kubenswrapper[4662]: I1208 09:38:32.612213 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:38:47 crc kubenswrapper[4662]: I1208 09:38:47.244636 4662 scope.go:117] "RemoveContainer" containerID="bcba7c1746701752c693dc17f64d5b7c10fa7c5d15894792b2143134bad911dc" Dec 08 09:38:47 crc kubenswrapper[4662]: I1208 09:38:47.265548 4662 scope.go:117] "RemoveContainer" containerID="de6bdd347068e8a7c063bb7a8d072308fc12f21e8ae059d4425c30046b69414c" Dec 08 09:38:47 crc kubenswrapper[4662]: I1208 09:38:47.287482 4662 scope.go:117] "RemoveContainer" containerID="f26c40d7b850fc3379360256964fb99f0ff0b14e58189fc610511c56c6d61b09" Dec 08 09:38:47 crc kubenswrapper[4662]: I1208 09:38:47.326111 4662 scope.go:117] "RemoveContainer" containerID="2ef96a4ea6409f3f1df47354f6ea661afdad58a59076951524ac5784e4147712" Dec 08 09:39:02 crc kubenswrapper[4662]: I1208 09:39:02.611033 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:39:02 crc kubenswrapper[4662]: I1208 09:39:02.612262 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.762056 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j5dfj"] Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.764686 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.774978 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5dfj"] Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.882611 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b48c93a-7bd3-472e-b21d-59eb414549d1-utilities\") pod \"redhat-operators-j5dfj\" (UID: \"2b48c93a-7bd3-472e-b21d-59eb414549d1\") " pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.882698 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b48c93a-7bd3-472e-b21d-59eb414549d1-catalog-content\") pod \"redhat-operators-j5dfj\" (UID: \"2b48c93a-7bd3-472e-b21d-59eb414549d1\") " pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.882765 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnm6\" (UniqueName: \"kubernetes.io/projected/2b48c93a-7bd3-472e-b21d-59eb414549d1-kube-api-access-pnnm6\") pod \"redhat-operators-j5dfj\" (UID: \"2b48c93a-7bd3-472e-b21d-59eb414549d1\") " pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.984586 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b48c93a-7bd3-472e-b21d-59eb414549d1-catalog-content\") pod \"redhat-operators-j5dfj\" (UID: \"2b48c93a-7bd3-472e-b21d-59eb414549d1\") " pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.984677 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnm6\" (UniqueName: \"kubernetes.io/projected/2b48c93a-7bd3-472e-b21d-59eb414549d1-kube-api-access-pnnm6\") pod \"redhat-operators-j5dfj\" (UID: \"2b48c93a-7bd3-472e-b21d-59eb414549d1\") " pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.984807 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b48c93a-7bd3-472e-b21d-59eb414549d1-utilities\") pod \"redhat-operators-j5dfj\" (UID: \"2b48c93a-7bd3-472e-b21d-59eb414549d1\") " pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.985211 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2b48c93a-7bd3-472e-b21d-59eb414549d1-utilities\") pod \"redhat-operators-j5dfj\" (UID: \"2b48c93a-7bd3-472e-b21d-59eb414549d1\") " pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:31 crc kubenswrapper[4662]: I1208 09:39:31.985698 4662 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2b48c93a-7bd3-472e-b21d-59eb414549d1-catalog-content\") pod \"redhat-operators-j5dfj\" (UID: \"2b48c93a-7bd3-472e-b21d-59eb414549d1\") " pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:32 crc kubenswrapper[4662]: I1208 09:39:32.006624 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnm6\" (UniqueName: \"kubernetes.io/projected/2b48c93a-7bd3-472e-b21d-59eb414549d1-kube-api-access-pnnm6\") pod \"redhat-operators-j5dfj\" (UID: \"2b48c93a-7bd3-472e-b21d-59eb414549d1\") " pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:32 crc kubenswrapper[4662]: I1208 09:39:32.099427 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:32 crc kubenswrapper[4662]: I1208 09:39:32.534382 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5dfj"] Dec 08 09:39:32 crc kubenswrapper[4662]: I1208 09:39:32.613606 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:39:32 crc kubenswrapper[4662]: I1208 09:39:32.614096 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:39:32 crc kubenswrapper[4662]: I1208 09:39:32.614159 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:39:32 crc kubenswrapper[4662]: I1208 09:39:32.615191 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 09:39:32 crc kubenswrapper[4662]: I1208 09:39:32.616195 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" containerID="cri-o://e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" gracePeriod=600 Dec 08 09:39:32 crc kubenswrapper[4662]: E1208 09:39:32.755959 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:39:33 crc kubenswrapper[4662]: I1208 09:39:33.432722 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" 
containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" exitCode=0 Dec 08 09:39:33 crc kubenswrapper[4662]: I1208 09:39:33.433938 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e"} Dec 08 09:39:33 crc kubenswrapper[4662]: I1208 09:39:33.434296 4662 scope.go:117] "RemoveContainer" containerID="2e9e9e908f5c82fe5008d57bd1d536ac24016344c62104f9a96f0f2c0d9a74ee" Dec 08 09:39:33 crc kubenswrapper[4662]: I1208 09:39:33.434882 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:39:33 crc kubenswrapper[4662]: E1208 09:39:33.435150 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:39:33 crc kubenswrapper[4662]: I1208 09:39:33.435766 4662 generic.go:334] "Generic (PLEG): container finished" podID="2b48c93a-7bd3-472e-b21d-59eb414549d1" containerID="863882244d2adda118a5471bc21a4c2521c92c34ba7d5f20a426ef705e44a3b9" exitCode=0 Dec 08 09:39:33 crc kubenswrapper[4662]: I1208 09:39:33.435951 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5dfj" event={"ID":"2b48c93a-7bd3-472e-b21d-59eb414549d1","Type":"ContainerDied","Data":"863882244d2adda118a5471bc21a4c2521c92c34ba7d5f20a426ef705e44a3b9"} Dec 08 09:39:33 crc kubenswrapper[4662]: I1208 09:39:33.436022 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5dfj" event={"ID":"2b48c93a-7bd3-472e-b21d-59eb414549d1","Type":"ContainerStarted","Data":"09bf658fed9c6612e039294e9694a1e1527f2562b14fa8fb0719e59d1f512007"} Dec 08 09:39:33 crc kubenswrapper[4662]: I1208 09:39:33.438970 4662 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 09:39:44 crc kubenswrapper[4662]: I1208 09:39:44.544155 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5dfj" event={"ID":"2b48c93a-7bd3-472e-b21d-59eb414549d1","Type":"ContainerStarted","Data":"6d12d6140ac32d9f6d985127f8955f7eba42f7ea9599db888389f11858fd496e"} Dec 08 09:39:44 crc kubenswrapper[4662]: I1208 09:39:44.697519 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:39:44 crc kubenswrapper[4662]: E1208 09:39:44.697929 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:39:44 crc kubenswrapper[4662]: E1208 09:39:44.811549 4662 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b48c93a_7bd3_472e_b21d_59eb414549d1.slice/crio-6d12d6140ac32d9f6d985127f8955f7eba42f7ea9599db888389f11858fd496e.scope\": RecentStats: unable to find data in memory cache]" Dec 08 09:39:46 crc kubenswrapper[4662]: I1208 09:39:46.565705 4662 generic.go:334] "Generic (PLEG): container finished" podID="2b48c93a-7bd3-472e-b21d-59eb414549d1" containerID="6d12d6140ac32d9f6d985127f8955f7eba42f7ea9599db888389f11858fd496e" exitCode=0 Dec 08 09:39:46 crc kubenswrapper[4662]: I1208 09:39:46.565864 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5dfj" event={"ID":"2b48c93a-7bd3-472e-b21d-59eb414549d1","Type":"ContainerDied","Data":"6d12d6140ac32d9f6d985127f8955f7eba42f7ea9599db888389f11858fd496e"} Dec 08 09:39:47 crc kubenswrapper[4662]: I1208 09:39:47.578862 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j5dfj" event={"ID":"2b48c93a-7bd3-472e-b21d-59eb414549d1","Type":"ContainerStarted","Data":"6288b4bc328af88814b6ac96fc5b3e79e3207a1a6db0ac042c9d9ea0576dd1df"} Dec 08 09:39:52 crc kubenswrapper[4662]: I1208 09:39:52.099667 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:52 crc kubenswrapper[4662]: I1208 09:39:52.100216 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:39:53 crc kubenswrapper[4662]: I1208 09:39:53.159626 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j5dfj" podUID="2b48c93a-7bd3-472e-b21d-59eb414549d1" containerName="registry-server" probeResult="failure" output=< Dec 08 09:39:53 crc kubenswrapper[4662]: timeout: failed to connect service ":50051" within 1s Dec 08 09:39:53 crc kubenswrapper[4662]: > Dec 08 09:39:53 crc kubenswrapper[4662]: I1208 09:39:53.628440 4662 generic.go:334] "Generic (PLEG): container finished" podID="138459d2-75b3-467c-9bf3-7b458dc202ad" containerID="5fdc3ffc9d554b2a3a0115df2ba111a776d78c258545f35fe1e49560eb20f931" exitCode=0 Dec 08 09:39:53 crc kubenswrapper[4662]: I1208 09:39:53.628507 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" event={"ID":"138459d2-75b3-467c-9bf3-7b458dc202ad","Type":"ContainerDied","Data":"5fdc3ffc9d554b2a3a0115df2ba111a776d78c258545f35fe1e49560eb20f931"} Dec 08 09:39:53 crc kubenswrapper[4662]: I1208 09:39:53.649099 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j5dfj" podStartSLOduration=9.040351612 podStartE2EDuration="22.64888299s" podCreationTimestamp="2025-12-08 09:39:31 +0000 UTC" firstStartedPulling="2025-12-08 09:39:33.438650965 +0000 UTC m=+1497.007678955" lastFinishedPulling="2025-12-08 09:39:47.047182333 +0000 UTC m=+1510.616210333" observedRunningTime="2025-12-08 09:39:47.601105624 +0000 UTC m=+1511.170133614" watchObservedRunningTime="2025-12-08 09:39:53.64888299 +0000 UTC m=+1517.217910980" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.092573 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.192308 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-bootstrap-combined-ca-bundle\") pod \"138459d2-75b3-467c-9bf3-7b458dc202ad\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.192422 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-inventory\") pod \"138459d2-75b3-467c-9bf3-7b458dc202ad\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.192490 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hv4ld\" (UniqueName: \"kubernetes.io/projected/138459d2-75b3-467c-9bf3-7b458dc202ad-kube-api-access-hv4ld\") pod \"138459d2-75b3-467c-9bf3-7b458dc202ad\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.192552 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-ssh-key\") pod \"138459d2-75b3-467c-9bf3-7b458dc202ad\" (UID: \"138459d2-75b3-467c-9bf3-7b458dc202ad\") " Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.198436 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/138459d2-75b3-467c-9bf3-7b458dc202ad-kube-api-access-hv4ld" (OuterVolumeSpecName: "kube-api-access-hv4ld") pod "138459d2-75b3-467c-9bf3-7b458dc202ad" (UID: "138459d2-75b3-467c-9bf3-7b458dc202ad"). InnerVolumeSpecName "kube-api-access-hv4ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.199629 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "138459d2-75b3-467c-9bf3-7b458dc202ad" (UID: "138459d2-75b3-467c-9bf3-7b458dc202ad"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.226488 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-inventory" (OuterVolumeSpecName: "inventory") pod "138459d2-75b3-467c-9bf3-7b458dc202ad" (UID: "138459d2-75b3-467c-9bf3-7b458dc202ad"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.229733 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "138459d2-75b3-467c-9bf3-7b458dc202ad" (UID: "138459d2-75b3-467c-9bf3-7b458dc202ad"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.294887 4662 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-inventory\") on node \"crc\" DevicePath \"\"" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.294927 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hv4ld\" (UniqueName: \"kubernetes.io/projected/138459d2-75b3-467c-9bf3-7b458dc202ad-kube-api-access-hv4ld\") on node \"crc\" DevicePath \"\"" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.294939 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.294949 4662 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/138459d2-75b3-467c-9bf3-7b458dc202ad-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.655122 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" event={"ID":"138459d2-75b3-467c-9bf3-7b458dc202ad","Type":"ContainerDied","Data":"16ceef7afaed831b3b6dfc069483788b1fb4483eda8f6ef1861d86fd7c8c8fb7"} Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.655158 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16ceef7afaed831b3b6dfc069483788b1fb4483eda8f6ef1861d86fd7c8c8fb7" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.655201 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.737504 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc"] Dec 08 09:39:55 crc kubenswrapper[4662]: E1208 09:39:55.738157 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="138459d2-75b3-467c-9bf3-7b458dc202ad" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.738175 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="138459d2-75b3-467c-9bf3-7b458dc202ad" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.738336 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="138459d2-75b3-467c-9bf3-7b458dc202ad" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.739111 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.741403 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.741635 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.742388 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-59jf7" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.742697 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.762603 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc"] Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.907515 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hscfd\" (UniqueName: \"kubernetes.io/projected/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-kube-api-access-hscfd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q42zc\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.907638 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q42zc\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:55 crc kubenswrapper[4662]: I1208 09:39:55.907774 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q42zc\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:56 crc kubenswrapper[4662]: I1208 09:39:56.009136 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hscfd\" (UniqueName: \"kubernetes.io/projected/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-kube-api-access-hscfd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q42zc\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:56 crc kubenswrapper[4662]: I1208 09:39:56.009286 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q42zc\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:56 crc kubenswrapper[4662]: I1208 09:39:56.010277 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-inventory\") 
pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q42zc\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:56 crc kubenswrapper[4662]: I1208 09:39:56.013983 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q42zc\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:56 crc kubenswrapper[4662]: I1208 09:39:56.014203 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q42zc\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:56 crc kubenswrapper[4662]: I1208 09:39:56.035939 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hscfd\" (UniqueName: \"kubernetes.io/projected/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-kube-api-access-hscfd\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-q42zc\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:56 crc kubenswrapper[4662]: I1208 09:39:56.074080 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:39:56 crc kubenswrapper[4662]: I1208 09:39:56.636874 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc"] Dec 08 09:39:56 crc kubenswrapper[4662]: I1208 09:39:56.668495 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" event={"ID":"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c","Type":"ContainerStarted","Data":"e2bcc3099dc798385aaf90c2a02329a582231108346d5e71f3a51c875af00747"} Dec 08 09:39:56 crc kubenswrapper[4662]: I1208 09:39:56.708054 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:39:56 crc kubenswrapper[4662]: E1208 09:39:56.708359 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:39:57 crc kubenswrapper[4662]: I1208 09:39:57.682919 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" event={"ID":"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c","Type":"ContainerStarted","Data":"ab13f20d4f44318f836eb31d812655c1680010af05bc1bae1d6ae71f77e7fbb7"} Dec 08 09:39:57 crc kubenswrapper[4662]: I1208 09:39:57.703795 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" podStartSLOduration=2.292586786 
podStartE2EDuration="2.703778505s" podCreationTimestamp="2025-12-08 09:39:55 +0000 UTC" firstStartedPulling="2025-12-08 09:39:56.643528242 +0000 UTC m=+1520.212556222" lastFinishedPulling="2025-12-08 09:39:57.054719951 +0000 UTC m=+1520.623747941" observedRunningTime="2025-12-08 09:39:57.702995104 +0000 UTC m=+1521.272023094" watchObservedRunningTime="2025-12-08 09:39:57.703778505 +0000 UTC m=+1521.272806495" Dec 08 09:40:02 crc kubenswrapper[4662]: I1208 09:40:02.140719 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:40:02 crc kubenswrapper[4662]: I1208 09:40:02.192115 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j5dfj" Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.088460 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j5dfj"] Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.265914 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cr7wn"] Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.266160 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cr7wn" podUID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerName="registry-server" containerID="cri-o://f7a9b7a04287bc8225c16500bd40ee9f8f2c36050da1a17b62784d46f5430374" gracePeriod=2 Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.468171 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hwq58"] Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.468717 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hwq58" podUID="10201fba-2819-468a-ac74-115ada895ae7" containerName="registry-server" containerID="cri-o://33eb88e67cdd43d1b8bc9af05fa455df5cda07335c1927ec6fb632c6f6911845" gracePeriod=2 Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.665534 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8lhzr"] Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.667092 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-8lhzr" podUID="f70914b3-e872-476a-b468-f380adce1373" containerName="registry-server" containerID="cri-o://7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6" gracePeriod=2 Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.740692 4662 generic.go:334] "Generic (PLEG): container finished" podID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerID="f7a9b7a04287bc8225c16500bd40ee9f8f2c36050da1a17b62784d46f5430374" exitCode=0 Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.741859 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr7wn" event={"ID":"ab30ee95-a03b-4653-a1d0-e9fb3f101af9","Type":"ContainerDied","Data":"f7a9b7a04287bc8225c16500bd40ee9f8f2c36050da1a17b62784d46f5430374"} Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.860123 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.865505 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7b754"] Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.865822 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7b754" podUID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerName="registry-server" containerID="cri-o://eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8" gracePeriod=2 Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.904014 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-utilities\") pod \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.904179 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-catalog-content\") pod \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.904216 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgccc\" (UniqueName: \"kubernetes.io/projected/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-kube-api-access-sgccc\") pod \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\" (UID: \"ab30ee95-a03b-4653-a1d0-e9fb3f101af9\") " Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.905038 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-utilities" (OuterVolumeSpecName: "utilities") pod "ab30ee95-a03b-4653-a1d0-e9fb3f101af9" (UID: "ab30ee95-a03b-4653-a1d0-e9fb3f101af9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:40:03 crc kubenswrapper[4662]: I1208 09:40:03.927759 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-kube-api-access-sgccc" (OuterVolumeSpecName: "kube-api-access-sgccc") pod "ab30ee95-a03b-4653-a1d0-e9fb3f101af9" (UID: "ab30ee95-a03b-4653-a1d0-e9fb3f101af9"). InnerVolumeSpecName "kube-api-access-sgccc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.009016 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgccc\" (UniqueName: \"kubernetes.io/projected/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-kube-api-access-sgccc\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.009054 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.081361 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ab30ee95-a03b-4653-a1d0-e9fb3f101af9" (UID: "ab30ee95-a03b-4653-a1d0-e9fb3f101af9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.110318 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ab30ee95-a03b-4653-a1d0-e9fb3f101af9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.449078 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.624837 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-catalog-content\") pod \"f70914b3-e872-476a-b468-f380adce1373\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.624929 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-utilities\") pod \"f70914b3-e872-476a-b468-f380adce1373\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.624983 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fd2hb\" (UniqueName: \"kubernetes.io/projected/f70914b3-e872-476a-b468-f380adce1373-kube-api-access-fd2hb\") pod \"f70914b3-e872-476a-b468-f380adce1373\" (UID: \"f70914b3-e872-476a-b468-f380adce1373\") " Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.634310 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-utilities" (OuterVolumeSpecName: "utilities") pod "f70914b3-e872-476a-b468-f380adce1373" (UID: "f70914b3-e872-476a-b468-f380adce1373"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.635453 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f70914b3-e872-476a-b468-f380adce1373-kube-api-access-fd2hb" (OuterVolumeSpecName: "kube-api-access-fd2hb") pod "f70914b3-e872-476a-b468-f380adce1373" (UID: "f70914b3-e872-476a-b468-f380adce1373"). InnerVolumeSpecName "kube-api-access-fd2hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.684830 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f70914b3-e872-476a-b468-f380adce1373" (UID: "f70914b3-e872-476a-b468-f380adce1373"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.723146 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.727485 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.727518 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f70914b3-e872-476a-b468-f380adce1373-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.727531 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd2hb\" (UniqueName: \"kubernetes.io/projected/f70914b3-e872-476a-b468-f380adce1373-kube-api-access-fd2hb\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.765442 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cr7wn" event={"ID":"ab30ee95-a03b-4653-a1d0-e9fb3f101af9","Type":"ContainerDied","Data":"07565f541d6f91406d3f82fc3ec56a31bc89f267e392ba326c30c06c80e4d13f"} Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.765500 4662 scope.go:117] "RemoveContainer" containerID="f7a9b7a04287bc8225c16500bd40ee9f8f2c36050da1a17b62784d46f5430374" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.765776 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cr7wn" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.783066 4662 generic.go:334] "Generic (PLEG): container finished" podID="f70914b3-e872-476a-b468-f380adce1373" containerID="7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6" exitCode=0 Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.783162 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lhzr" event={"ID":"f70914b3-e872-476a-b468-f380adce1373","Type":"ContainerDied","Data":"7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6"} Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.783185 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-8lhzr" event={"ID":"f70914b3-e872-476a-b468-f380adce1373","Type":"ContainerDied","Data":"7119ff8c820d03550650f05567310a639d4ae77f60f31b6523f53fed9f24282e"} Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.785140 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-8lhzr" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.793782 4662 generic.go:334] "Generic (PLEG): container finished" podID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerID="eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8" exitCode=0 Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.793853 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b754" event={"ID":"23663fdd-b72b-4956-9b86-5da768bcf9c1","Type":"ContainerDied","Data":"eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8"} Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.793880 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7b754" event={"ID":"23663fdd-b72b-4956-9b86-5da768bcf9c1","Type":"ContainerDied","Data":"4cadd8a871eb788b1b13c55a0ed884cf4fa72fbee46befd970d8f384dcfe15d4"} Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.794008 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7b754" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.819567 4662 scope.go:117] "RemoveContainer" containerID="d71430b05f1ee3c1443923750e26a70e37b29ef0b611035d08a4e9a37403e23d" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.836329 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-utilities\") pod \"23663fdd-b72b-4956-9b86-5da768bcf9c1\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.836424 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-catalog-content\") pod \"23663fdd-b72b-4956-9b86-5da768bcf9c1\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.836488 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnqqb\" (UniqueName: \"kubernetes.io/projected/23663fdd-b72b-4956-9b86-5da768bcf9c1-kube-api-access-mnqqb\") pod \"23663fdd-b72b-4956-9b86-5da768bcf9c1\" (UID: \"23663fdd-b72b-4956-9b86-5da768bcf9c1\") " Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.838033 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-utilities" (OuterVolumeSpecName: "utilities") pod "23663fdd-b72b-4956-9b86-5da768bcf9c1" (UID: "23663fdd-b72b-4956-9b86-5da768bcf9c1"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.847221 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.864163 4662 scope.go:117] "RemoveContainer" containerID="caa4ef45fba22b6b56b6ee3f8f7bc45bc38f223418a5c1b7530a323b7d8c6d8f" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.878863 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23663fdd-b72b-4956-9b86-5da768bcf9c1-kube-api-access-mnqqb" (OuterVolumeSpecName: "kube-api-access-mnqqb") pod "23663fdd-b72b-4956-9b86-5da768bcf9c1" (UID: "23663fdd-b72b-4956-9b86-5da768bcf9c1"). InnerVolumeSpecName "kube-api-access-mnqqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.910585 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cr7wn"] Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.930800 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cr7wn"] Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.951830 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnqqb\" (UniqueName: \"kubernetes.io/projected/23663fdd-b72b-4956-9b86-5da768bcf9c1-kube-api-access-mnqqb\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.954182 4662 scope.go:117] "RemoveContainer" containerID="7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.955423 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-8lhzr"] Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.978019 4662 scope.go:117] "RemoveContainer" containerID="a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e" Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.978041 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-8lhzr"] Dec 08 09:40:04 crc kubenswrapper[4662]: I1208 09:40:04.986112 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23663fdd-b72b-4956-9b86-5da768bcf9c1" (UID: "23663fdd-b72b-4956-9b86-5da768bcf9c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.008951 4662 scope.go:117] "RemoveContainer" containerID="6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.032899 4662 scope.go:117] "RemoveContainer" containerID="7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6" Dec 08 09:40:05 crc kubenswrapper[4662]: E1208 09:40:05.033454 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6\": container with ID starting with 7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6 not found: ID does not exist" containerID="7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.033557 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6"} err="failed to get container status \"7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6\": rpc error: code = NotFound desc = could not find container \"7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6\": container with ID starting with 7e9f16e08c0863ed7b8c921b22c56ad5afdd1eb13954dc113b6429d927507ab6 not found: ID does not exist" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.033651 4662 scope.go:117] "RemoveContainer" containerID="a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e" Dec 08 09:40:05 crc kubenswrapper[4662]: E1208 09:40:05.034199 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e\": container with ID starting with a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e not found: ID does not exist" containerID="a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.034241 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e"} err="failed to get container status \"a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e\": rpc error: code = NotFound desc = could not find container \"a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e\": container with ID starting with a3875079891baa0d9ee0996f0eb40517b4acdbd738fed007911ed19199a5d74e not found: ID does not exist" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.034279 4662 scope.go:117] "RemoveContainer" containerID="6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6" Dec 08 09:40:05 crc kubenswrapper[4662]: E1208 09:40:05.034785 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6\": container with ID starting with 6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6 not found: ID does not exist" containerID="6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.034819 4662 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6"} err="failed to get container status \"6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6\": rpc error: code = NotFound desc = could not find container \"6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6\": container with ID starting with 6b0b10dbd5f83d7a07da8eb6a87060acc9b2ba6f548dd5d4b69a690365314ca6 not found: ID does not exist" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.034843 4662 scope.go:117] "RemoveContainer" containerID="eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.053136 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23663fdd-b72b-4956-9b86-5da768bcf9c1-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.054416 4662 scope.go:117] "RemoveContainer" containerID="9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.079650 4662 scope.go:117] "RemoveContainer" containerID="c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.122061 4662 scope.go:117] "RemoveContainer" containerID="eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8" Dec 08 09:40:05 crc kubenswrapper[4662]: E1208 09:40:05.123281 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8\": container with ID starting with eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8 not found: ID does not exist" containerID="eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.123362 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8"} err="failed to get container status \"eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8\": rpc error: code = NotFound desc = could not find container \"eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8\": container with ID starting with eef3e837e552d139fb621fcbe4ffc419ec2eac909570f43a0ff45f9462fa86f8 not found: ID does not exist" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.123398 4662 scope.go:117] "RemoveContainer" containerID="9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de" Dec 08 09:40:05 crc kubenswrapper[4662]: E1208 09:40:05.123846 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de\": container with ID starting with 9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de not found: ID does not exist" containerID="9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.123889 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de"} err="failed to get container status \"9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de\": rpc error: code = NotFound desc = could not find container 
\"9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de\": container with ID starting with 9926af84b8edd3de214e417a248d1c16cdfa8d9bbe288f420a3226ee402d39de not found: ID does not exist" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.123915 4662 scope.go:117] "RemoveContainer" containerID="c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8" Dec 08 09:40:05 crc kubenswrapper[4662]: E1208 09:40:05.124172 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8\": container with ID starting with c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8 not found: ID does not exist" containerID="c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.124197 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8"} err="failed to get container status \"c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8\": rpc error: code = NotFound desc = could not find container \"c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8\": container with ID starting with c3010867de5995a8e7e59d8d12295ce610ad9384d0a34e01acd65ff3d50d0bb8 not found: ID does not exist" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.150368 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7b754"] Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.166352 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7b754"] Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.805026 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hwq58_10201fba-2819-468a-ac74-115ada895ae7/registry-server/0.log" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.806068 4662 generic.go:334] "Generic (PLEG): container finished" podID="10201fba-2819-468a-ac74-115ada895ae7" containerID="33eb88e67cdd43d1b8bc9af05fa455df5cda07335c1927ec6fb632c6f6911845" exitCode=137 Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.806131 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwq58" event={"ID":"10201fba-2819-468a-ac74-115ada895ae7","Type":"ContainerDied","Data":"33eb88e67cdd43d1b8bc9af05fa455df5cda07335c1927ec6fb632c6f6911845"} Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.963291 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hwq58_10201fba-2819-468a-ac74-115ada895ae7/registry-server/0.log" Dec 08 09:40:05 crc kubenswrapper[4662]: I1208 09:40:05.966241 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.072805 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-utilities\") pod \"10201fba-2819-468a-ac74-115ada895ae7\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.073030 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zflcz\" (UniqueName: \"kubernetes.io/projected/10201fba-2819-468a-ac74-115ada895ae7-kube-api-access-zflcz\") pod \"10201fba-2819-468a-ac74-115ada895ae7\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.073314 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-utilities" (OuterVolumeSpecName: "utilities") pod "10201fba-2819-468a-ac74-115ada895ae7" (UID: "10201fba-2819-468a-ac74-115ada895ae7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.073794 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-catalog-content\") pod \"10201fba-2819-468a-ac74-115ada895ae7\" (UID: \"10201fba-2819-468a-ac74-115ada895ae7\") " Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.076073 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.077476 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10201fba-2819-468a-ac74-115ada895ae7-kube-api-access-zflcz" (OuterVolumeSpecName: "kube-api-access-zflcz") pod "10201fba-2819-468a-ac74-115ada895ae7" (UID: "10201fba-2819-468a-ac74-115ada895ae7"). InnerVolumeSpecName "kube-api-access-zflcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.117468 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10201fba-2819-468a-ac74-115ada895ae7" (UID: "10201fba-2819-468a-ac74-115ada895ae7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.177854 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zflcz\" (UniqueName: \"kubernetes.io/projected/10201fba-2819-468a-ac74-115ada895ae7-kube-api-access-zflcz\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.177885 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10201fba-2819-468a-ac74-115ada895ae7-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.707962 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23663fdd-b72b-4956-9b86-5da768bcf9c1" path="/var/lib/kubelet/pods/23663fdd-b72b-4956-9b86-5da768bcf9c1/volumes" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.708813 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" path="/var/lib/kubelet/pods/ab30ee95-a03b-4653-a1d0-e9fb3f101af9/volumes" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.709543 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f70914b3-e872-476a-b468-f380adce1373" path="/var/lib/kubelet/pods/f70914b3-e872-476a-b468-f380adce1373/volumes" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.837004 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hwq58_10201fba-2819-468a-ac74-115ada895ae7/registry-server/0.log" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.838694 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hwq58" event={"ID":"10201fba-2819-468a-ac74-115ada895ae7","Type":"ContainerDied","Data":"f7d020b67b93d90c0875141944b4070d04276425c17101361f83d0ab986ee8ad"} Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.838750 4662 scope.go:117] "RemoveContainer" containerID="33eb88e67cdd43d1b8bc9af05fa455df5cda07335c1927ec6fb632c6f6911845" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.838805 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hwq58" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.860100 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hwq58"] Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.863307 4662 scope.go:117] "RemoveContainer" containerID="2f4c7c69faa284c28b66b53b915abc7408cd911ee7d5eaa97ff2a594a1247447" Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.868124 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hwq58"] Dec 08 09:40:06 crc kubenswrapper[4662]: I1208 09:40:06.893368 4662 scope.go:117] "RemoveContainer" containerID="5127789e4675b48c9212bf974fd4244dee84e28f67889571c26fc258917876a3" Dec 08 09:40:08 crc kubenswrapper[4662]: I1208 09:40:08.698734 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:40:08 crc kubenswrapper[4662]: E1208 09:40:08.699812 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:40:08 crc kubenswrapper[4662]: I1208 09:40:08.708475 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10201fba-2819-468a-ac74-115ada895ae7" path="/var/lib/kubelet/pods/10201fba-2819-468a-ac74-115ada895ae7/volumes" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.889663 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nq9pb"] Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890110 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70914b3-e872-476a-b468-f380adce1373" containerName="extract-utilities" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890122 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70914b3-e872-476a-b468-f380adce1373" containerName="extract-utilities" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890136 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890142 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890156 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10201fba-2819-468a-ac74-115ada895ae7" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890164 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="10201fba-2819-468a-ac74-115ada895ae7" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890174 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70914b3-e872-476a-b468-f380adce1373" containerName="extract-content" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890180 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70914b3-e872-476a-b468-f380adce1373" containerName="extract-content" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890195 4662 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890201 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890213 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70914b3-e872-476a-b468-f380adce1373" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890219 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70914b3-e872-476a-b468-f380adce1373" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890240 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10201fba-2819-468a-ac74-115ada895ae7" containerName="extract-utilities" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890247 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="10201fba-2819-468a-ac74-115ada895ae7" containerName="extract-utilities" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890258 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerName="extract-content" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890263 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerName="extract-content" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890277 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerName="extract-utilities" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890284 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerName="extract-utilities" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890293 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerName="extract-content" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890299 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerName="extract-content" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890335 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerName="extract-utilities" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890341 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerName="extract-utilities" Dec 08 09:40:09 crc kubenswrapper[4662]: E1208 09:40:09.890355 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10201fba-2819-468a-ac74-115ada895ae7" containerName="extract-content" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890361 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="10201fba-2819-468a-ac74-115ada895ae7" containerName="extract-content" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890518 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ab30ee95-a03b-4653-a1d0-e9fb3f101af9" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890536 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="10201fba-2819-468a-ac74-115ada895ae7" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 
09:40:09.890549 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="23663fdd-b72b-4956-9b86-5da768bcf9c1" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.890561 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="f70914b3-e872-476a-b468-f380adce1373" containerName="registry-server" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.892119 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:09 crc kubenswrapper[4662]: I1208 09:40:09.903370 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nq9pb"] Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.049058 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-utilities\") pod \"redhat-marketplace-nq9pb\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.049116 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qffhj\" (UniqueName: \"kubernetes.io/projected/a1bfd179-7bd5-4cdf-af38-ec557ceff479-kube-api-access-qffhj\") pod \"redhat-marketplace-nq9pb\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.049214 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-catalog-content\") pod \"redhat-marketplace-nq9pb\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.150972 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-catalog-content\") pod \"redhat-marketplace-nq9pb\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.151067 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-utilities\") pod \"redhat-marketplace-nq9pb\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.151112 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qffhj\" (UniqueName: \"kubernetes.io/projected/a1bfd179-7bd5-4cdf-af38-ec557ceff479-kube-api-access-qffhj\") pod \"redhat-marketplace-nq9pb\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.151578 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-catalog-content\") pod \"redhat-marketplace-nq9pb\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:10 
crc kubenswrapper[4662]: I1208 09:40:10.151990 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-utilities\") pod \"redhat-marketplace-nq9pb\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.172852 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qffhj\" (UniqueName: \"kubernetes.io/projected/a1bfd179-7bd5-4cdf-af38-ec557ceff479-kube-api-access-qffhj\") pod \"redhat-marketplace-nq9pb\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.228534 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.692956 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nq9pb"] Dec 08 09:40:10 crc kubenswrapper[4662]: W1208 09:40:10.730266 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1bfd179_7bd5_4cdf_af38_ec557ceff479.slice/crio-75478951c696e89d38fe424ad03de1e1e46059e987d16324a974dd7e923f3225 WatchSource:0}: Error finding container 75478951c696e89d38fe424ad03de1e1e46059e987d16324a974dd7e923f3225: Status 404 returned error can't find the container with id 75478951c696e89d38fe424ad03de1e1e46059e987d16324a974dd7e923f3225 Dec 08 09:40:10 crc kubenswrapper[4662]: I1208 09:40:10.882700 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq9pb" event={"ID":"a1bfd179-7bd5-4cdf-af38-ec557ceff479","Type":"ContainerStarted","Data":"75478951c696e89d38fe424ad03de1e1e46059e987d16324a974dd7e923f3225"} Dec 08 09:40:11 crc kubenswrapper[4662]: I1208 09:40:11.895124 4662 generic.go:334] "Generic (PLEG): container finished" podID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerID="7b34eb396211746192e2e65e72daa5dda93ee026d3516fdc50d5e64f1612ee52" exitCode=0 Dec 08 09:40:11 crc kubenswrapper[4662]: I1208 09:40:11.895168 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq9pb" event={"ID":"a1bfd179-7bd5-4cdf-af38-ec557ceff479","Type":"ContainerDied","Data":"7b34eb396211746192e2e65e72daa5dda93ee026d3516fdc50d5e64f1612ee52"} Dec 08 09:40:12 crc kubenswrapper[4662]: I1208 09:40:12.906443 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq9pb" event={"ID":"a1bfd179-7bd5-4cdf-af38-ec557ceff479","Type":"ContainerStarted","Data":"227067d874c6d46b6f16d36312a6904f2531621e93f69c2a488e4e942f9565d2"} Dec 08 09:40:13 crc kubenswrapper[4662]: I1208 09:40:13.917457 4662 generic.go:334] "Generic (PLEG): container finished" podID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerID="227067d874c6d46b6f16d36312a6904f2531621e93f69c2a488e4e942f9565d2" exitCode=0 Dec 08 09:40:13 crc kubenswrapper[4662]: I1208 09:40:13.917551 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq9pb" event={"ID":"a1bfd179-7bd5-4cdf-af38-ec557ceff479","Type":"ContainerDied","Data":"227067d874c6d46b6f16d36312a6904f2531621e93f69c2a488e4e942f9565d2"} Dec 08 09:40:14 crc kubenswrapper[4662]: I1208 09:40:14.933775 4662 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq9pb" event={"ID":"a1bfd179-7bd5-4cdf-af38-ec557ceff479","Type":"ContainerStarted","Data":"ee4dfddd6767a1ff1b032229b590c64c4eb94086daf9ed08e1a804e3cb9a45cb"} Dec 08 09:40:14 crc kubenswrapper[4662]: I1208 09:40:14.954286 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nq9pb" podStartSLOduration=3.346475445 podStartE2EDuration="5.954266863s" podCreationTimestamp="2025-12-08 09:40:09 +0000 UTC" firstStartedPulling="2025-12-08 09:40:11.898679504 +0000 UTC m=+1535.467707494" lastFinishedPulling="2025-12-08 09:40:14.506470932 +0000 UTC m=+1538.075498912" observedRunningTime="2025-12-08 09:40:14.950241045 +0000 UTC m=+1538.519269035" watchObservedRunningTime="2025-12-08 09:40:14.954266863 +0000 UTC m=+1538.523294853" Dec 08 09:40:20 crc kubenswrapper[4662]: I1208 09:40:20.228676 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:20 crc kubenswrapper[4662]: I1208 09:40:20.229247 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:20 crc kubenswrapper[4662]: I1208 09:40:20.282110 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:20 crc kubenswrapper[4662]: I1208 09:40:20.698796 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:40:20 crc kubenswrapper[4662]: E1208 09:40:20.700417 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:40:21 crc kubenswrapper[4662]: I1208 09:40:21.043561 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:21 crc kubenswrapper[4662]: I1208 09:40:21.097102 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nq9pb"] Dec 08 09:40:23 crc kubenswrapper[4662]: I1208 09:40:23.008020 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nq9pb" podUID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerName="registry-server" containerID="cri-o://ee4dfddd6767a1ff1b032229b590c64c4eb94086daf9ed08e1a804e3cb9a45cb" gracePeriod=2 Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.018331 4662 generic.go:334] "Generic (PLEG): container finished" podID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerID="ee4dfddd6767a1ff1b032229b590c64c4eb94086daf9ed08e1a804e3cb9a45cb" exitCode=0 Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.018389 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nq9pb" event={"ID":"a1bfd179-7bd5-4cdf-af38-ec557ceff479","Type":"ContainerDied","Data":"ee4dfddd6767a1ff1b032229b590c64c4eb94086daf9ed08e1a804e3cb9a45cb"} Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.018704 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-nq9pb" event={"ID":"a1bfd179-7bd5-4cdf-af38-ec557ceff479","Type":"ContainerDied","Data":"75478951c696e89d38fe424ad03de1e1e46059e987d16324a974dd7e923f3225"} Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.018723 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75478951c696e89d38fe424ad03de1e1e46059e987d16324a974dd7e923f3225" Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.053713 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.100651 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qffhj\" (UniqueName: \"kubernetes.io/projected/a1bfd179-7bd5-4cdf-af38-ec557ceff479-kube-api-access-qffhj\") pod \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.100704 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-catalog-content\") pod \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.100815 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-utilities\") pod \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\" (UID: \"a1bfd179-7bd5-4cdf-af38-ec557ceff479\") " Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.101651 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-utilities" (OuterVolumeSpecName: "utilities") pod "a1bfd179-7bd5-4cdf-af38-ec557ceff479" (UID: "a1bfd179-7bd5-4cdf-af38-ec557ceff479"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.108270 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1bfd179-7bd5-4cdf-af38-ec557ceff479-kube-api-access-qffhj" (OuterVolumeSpecName: "kube-api-access-qffhj") pod "a1bfd179-7bd5-4cdf-af38-ec557ceff479" (UID: "a1bfd179-7bd5-4cdf-af38-ec557ceff479"). InnerVolumeSpecName "kube-api-access-qffhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.126762 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a1bfd179-7bd5-4cdf-af38-ec557ceff479" (UID: "a1bfd179-7bd5-4cdf-af38-ec557ceff479"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.203379 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qffhj\" (UniqueName: \"kubernetes.io/projected/a1bfd179-7bd5-4cdf-af38-ec557ceff479-kube-api-access-qffhj\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.203701 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:24 crc kubenswrapper[4662]: I1208 09:40:24.203711 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a1bfd179-7bd5-4cdf-af38-ec557ceff479-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:40:25 crc kubenswrapper[4662]: I1208 09:40:25.030669 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nq9pb" Dec 08 09:40:25 crc kubenswrapper[4662]: I1208 09:40:25.071031 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nq9pb"] Dec 08 09:40:25 crc kubenswrapper[4662]: I1208 09:40:25.084865 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nq9pb"] Dec 08 09:40:26 crc kubenswrapper[4662]: I1208 09:40:26.708923 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" path="/var/lib/kubelet/pods/a1bfd179-7bd5-4cdf-af38-ec557ceff479/volumes" Dec 08 09:40:34 crc kubenswrapper[4662]: I1208 09:40:34.051538 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-gtrsb"] Dec 08 09:40:34 crc kubenswrapper[4662]: I1208 09:40:34.063663 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-9489-account-create-update-pswtb"] Dec 08 09:40:34 crc kubenswrapper[4662]: I1208 09:40:34.074637 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-gtrsb"] Dec 08 09:40:34 crc kubenswrapper[4662]: I1208 09:40:34.082582 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-9489-account-create-update-pswtb"] Dec 08 09:40:34 crc kubenswrapper[4662]: I1208 09:40:34.698028 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:40:34 crc kubenswrapper[4662]: E1208 09:40:34.698395 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:40:34 crc kubenswrapper[4662]: I1208 09:40:34.707464 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54617dbd-843e-4d2e-bae8-f2435e3d7ad3" path="/var/lib/kubelet/pods/54617dbd-843e-4d2e-bae8-f2435e3d7ad3/volumes" Dec 08 09:40:34 crc kubenswrapper[4662]: I1208 09:40:34.708233 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6447eae0-9785-4301-9b23-ea37ecd5317b" path="/var/lib/kubelet/pods/6447eae0-9785-4301-9b23-ea37ecd5317b/volumes" Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 
09:40:40.043229 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-k7kv5"] Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.051484 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-72f8-account-create-update-ddjnb"] Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.068577 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-c54f-account-create-update-zz4hp"] Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.081780 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-k7kv5"] Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.089210 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-vh2cr"] Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.097035 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-72f8-account-create-update-ddjnb"] Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.104335 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-c54f-account-create-update-zz4hp"] Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.112313 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-vh2cr"] Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.712012 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1faf9f8b-598e-4015-ae89-1e289ef305d9" path="/var/lib/kubelet/pods/1faf9f8b-598e-4015-ae89-1e289ef305d9/volumes" Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.713146 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="355ee0bb-eab4-4a74-b65d-591809af525a" path="/var/lib/kubelet/pods/355ee0bb-eab4-4a74-b65d-591809af525a/volumes" Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.713878 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c05fb17-8432-4eaa-a64c-a1255b17bdce" path="/var/lib/kubelet/pods/4c05fb17-8432-4eaa-a64c-a1255b17bdce/volumes" Dec 08 09:40:40 crc kubenswrapper[4662]: I1208 09:40:40.714788 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be309358-6c47-42fb-b818-014b339d4a53" path="/var/lib/kubelet/pods/be309358-6c47-42fb-b818-014b339d4a53/volumes" Dec 08 09:40:47 crc kubenswrapper[4662]: I1208 09:40:47.403484 4662 scope.go:117] "RemoveContainer" containerID="325e7cc3c5b334b9c9056447d1db7b418f2abf0d8b5f6eea86e564b8f8cdf2e4" Dec 08 09:40:47 crc kubenswrapper[4662]: I1208 09:40:47.424830 4662 scope.go:117] "RemoveContainer" containerID="1c50b92edc817a5c8cf63eb760f971f87c8fbc9154f36e8508ac8fa546731fbe" Dec 08 09:40:47 crc kubenswrapper[4662]: I1208 09:40:47.470154 4662 scope.go:117] "RemoveContainer" containerID="d0fa5c4d4187dc0dd9998a0a6fb189d7eea1cf8f447e07b923578cbeee487553" Dec 08 09:40:47 crc kubenswrapper[4662]: I1208 09:40:47.513887 4662 scope.go:117] "RemoveContainer" containerID="7a7f9377c6401cc5a226d488de2b8f11c315ae55a5d8b50d5b570c8921dddc52" Dec 08 09:40:47 crc kubenswrapper[4662]: I1208 09:40:47.557778 4662 scope.go:117] "RemoveContainer" containerID="17a0de156814f21d842431fd8b301bf727b7a0c13d1bc52e550807d6fd995e39" Dec 08 09:40:47 crc kubenswrapper[4662]: I1208 09:40:47.616001 4662 scope.go:117] "RemoveContainer" containerID="a1f2ac266490b81ee4e7884bd29c2a4bf03a8dea8a4f37e94f63c90fc662992d" Dec 08 09:40:48 crc kubenswrapper[4662]: I1208 09:40:48.697704 4662 scope.go:117] "RemoveContainer" 
containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:40:48 crc kubenswrapper[4662]: E1208 09:40:48.698278 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:41:02 crc kubenswrapper[4662]: I1208 09:41:02.697977 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:41:02 crc kubenswrapper[4662]: E1208 09:41:02.698671 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.047118 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-pwrws"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.074124 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ba3-account-create-update-8hzfj"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.083752 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-39b5-account-create-update-296w8"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.095189 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-nwbcp"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.108321 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-7655-account-create-update-vv749"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.118892 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-39b5-account-create-update-296w8"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.126921 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-pwrws"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.135813 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-7655-account-create-update-vv749"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.143424 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-z9n8v"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.151846 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6ba3-account-create-update-8hzfj"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.159396 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-nwbcp"] Dec 08 09:41:03 crc kubenswrapper[4662]: I1208 09:41:03.169003 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-z9n8v"] Dec 08 09:41:04 crc kubenswrapper[4662]: I1208 09:41:04.708981 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b9e44c6-473a-42d0-8c92-6542c91e8e1e" path="/var/lib/kubelet/pods/1b9e44c6-473a-42d0-8c92-6542c91e8e1e/volumes" Dec 08 09:41:04 crc kubenswrapper[4662]: I1208 
09:41:04.709846 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="378bb91e-6330-4052-94d8-01bf45db8010" path="/var/lib/kubelet/pods/378bb91e-6330-4052-94d8-01bf45db8010/volumes" Dec 08 09:41:04 crc kubenswrapper[4662]: I1208 09:41:04.711141 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6438122c-3da2-4f70-bf28-4c184f024504" path="/var/lib/kubelet/pods/6438122c-3da2-4f70-bf28-4c184f024504/volumes" Dec 08 09:41:04 crc kubenswrapper[4662]: I1208 09:41:04.711860 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4e3ec89-606e-422b-a9e8-48f37a44cc61" path="/var/lib/kubelet/pods/b4e3ec89-606e-422b-a9e8-48f37a44cc61/volumes" Dec 08 09:41:04 crc kubenswrapper[4662]: I1208 09:41:04.712590 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5dbeb2a-e269-4ad0-8abd-dcf1547c6037" path="/var/lib/kubelet/pods/c5dbeb2a-e269-4ad0-8abd-dcf1547c6037/volumes" Dec 08 09:41:04 crc kubenswrapper[4662]: I1208 09:41:04.713385 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec279565-62e7-416e-9518-f4fb11ad219b" path="/var/lib/kubelet/pods/ec279565-62e7-416e-9518-f4fb11ad219b/volumes" Dec 08 09:41:08 crc kubenswrapper[4662]: I1208 09:41:08.035286 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-5ll52"] Dec 08 09:41:08 crc kubenswrapper[4662]: I1208 09:41:08.051507 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-5ll52"] Dec 08 09:41:08 crc kubenswrapper[4662]: I1208 09:41:08.707725 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70d93045-d92f-482f-af8d-97f6f752703b" path="/var/lib/kubelet/pods/70d93045-d92f-482f-af8d-97f6f752703b/volumes" Dec 08 09:41:11 crc kubenswrapper[4662]: I1208 09:41:11.433950 4662 generic.go:334] "Generic (PLEG): container finished" podID="c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c" containerID="ab13f20d4f44318f836eb31d812655c1680010af05bc1bae1d6ae71f77e7fbb7" exitCode=0 Dec 08 09:41:11 crc kubenswrapper[4662]: I1208 09:41:11.434025 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" event={"ID":"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c","Type":"ContainerDied","Data":"ab13f20d4f44318f836eb31d812655c1680010af05bc1bae1d6ae71f77e7fbb7"} Dec 08 09:41:12 crc kubenswrapper[4662]: I1208 09:41:12.854720 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:41:12 crc kubenswrapper[4662]: I1208 09:41:12.918712 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-inventory\") pod \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " Dec 08 09:41:12 crc kubenswrapper[4662]: I1208 09:41:12.918856 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hscfd\" (UniqueName: \"kubernetes.io/projected/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-kube-api-access-hscfd\") pod \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " Dec 08 09:41:12 crc kubenswrapper[4662]: I1208 09:41:12.919208 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-ssh-key\") pod \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\" (UID: \"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c\") " Dec 08 09:41:12 crc kubenswrapper[4662]: I1208 09:41:12.926257 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-kube-api-access-hscfd" (OuterVolumeSpecName: "kube-api-access-hscfd") pod "c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c" (UID: "c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c"). InnerVolumeSpecName "kube-api-access-hscfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:41:12 crc kubenswrapper[4662]: I1208 09:41:12.953915 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c" (UID: "c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:41:12 crc kubenswrapper[4662]: I1208 09:41:12.958533 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-inventory" (OuterVolumeSpecName: "inventory") pod "c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c" (UID: "c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.021585 4662 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-inventory\") on node \"crc\" DevicePath \"\"" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.021631 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hscfd\" (UniqueName: \"kubernetes.io/projected/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-kube-api-access-hscfd\") on node \"crc\" DevicePath \"\"" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.021643 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.450961 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" event={"ID":"c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c","Type":"ContainerDied","Data":"e2bcc3099dc798385aaf90c2a02329a582231108346d5e71f3a51c875af00747"} Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.451228 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e2bcc3099dc798385aaf90c2a02329a582231108346d5e71f3a51c875af00747" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.451047 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-q42zc" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.544627 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n"] Dec 08 09:41:13 crc kubenswrapper[4662]: E1208 09:41:13.545058 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.545077 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 08 09:41:13 crc kubenswrapper[4662]: E1208 09:41:13.545095 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerName="extract-utilities" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.545101 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerName="extract-utilities" Dec 08 09:41:13 crc kubenswrapper[4662]: E1208 09:41:13.545116 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerName="extract-content" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.545121 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerName="extract-content" Dec 08 09:41:13 crc kubenswrapper[4662]: E1208 09:41:13.545139 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerName="registry-server" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.545145 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerName="registry-server" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.545319 4662 
memory_manager.go:354] "RemoveStaleState removing state" podUID="a1bfd179-7bd5-4cdf-af38-ec557ceff479" containerName="registry-server" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.545338 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.545893 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.548209 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.548682 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-59jf7" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.548901 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.549119 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.561588 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n"] Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.631968 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.632037 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gncsc\" (UniqueName: \"kubernetes.io/projected/c10c6537-3808-47c8-bf89-01e1bff9f51e-kube-api-access-gncsc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.632254 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.734400 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.734476 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gncsc\" (UniqueName: 
\"kubernetes.io/projected/c10c6537-3808-47c8-bf89-01e1bff9f51e-kube-api-access-gncsc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.734539 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.740229 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.748300 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.753764 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gncsc\" (UniqueName: \"kubernetes.io/projected/c10c6537-3808-47c8-bf89-01e1bff9f51e-kube-api-access-gncsc\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:13 crc kubenswrapper[4662]: I1208 09:41:13.866133 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:14 crc kubenswrapper[4662]: I1208 09:41:14.378859 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n"] Dec 08 09:41:14 crc kubenswrapper[4662]: I1208 09:41:14.458063 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" event={"ID":"c10c6537-3808-47c8-bf89-01e1bff9f51e","Type":"ContainerStarted","Data":"4b304c09ca4d13e30ad8122cbcc10ad1301cc2af88639c91a6743eb46c62da7c"} Dec 08 09:41:14 crc kubenswrapper[4662]: I1208 09:41:14.698073 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:41:14 crc kubenswrapper[4662]: E1208 09:41:14.698654 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:41:15 crc kubenswrapper[4662]: I1208 09:41:15.467454 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" event={"ID":"c10c6537-3808-47c8-bf89-01e1bff9f51e","Type":"ContainerStarted","Data":"d14f48a36e05d987f100871692651476c2c0a5d66f940b6a9b877225f6f6b650"} Dec 08 09:41:15 crc kubenswrapper[4662]: I1208 09:41:15.498632 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" podStartSLOduration=2.014556947 podStartE2EDuration="2.498607132s" podCreationTimestamp="2025-12-08 09:41:13 +0000 UTC" firstStartedPulling="2025-12-08 09:41:14.381549443 +0000 UTC m=+1597.950577433" lastFinishedPulling="2025-12-08 09:41:14.865599618 +0000 UTC m=+1598.434627618" observedRunningTime="2025-12-08 09:41:15.490016051 +0000 UTC m=+1599.059044051" watchObservedRunningTime="2025-12-08 09:41:15.498607132 +0000 UTC m=+1599.067635132" Dec 08 09:41:20 crc kubenswrapper[4662]: I1208 09:41:20.033960 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bkngw"] Dec 08 09:41:20 crc kubenswrapper[4662]: I1208 09:41:20.040345 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bkngw"] Dec 08 09:41:20 crc kubenswrapper[4662]: I1208 09:41:20.516100 4662 generic.go:334] "Generic (PLEG): container finished" podID="c10c6537-3808-47c8-bf89-01e1bff9f51e" containerID="d14f48a36e05d987f100871692651476c2c0a5d66f940b6a9b877225f6f6b650" exitCode=0 Dec 08 09:41:20 crc kubenswrapper[4662]: I1208 09:41:20.516142 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" event={"ID":"c10c6537-3808-47c8-bf89-01e1bff9f51e","Type":"ContainerDied","Data":"d14f48a36e05d987f100871692651476c2c0a5d66f940b6a9b877225f6f6b650"} Dec 08 09:41:20 crc kubenswrapper[4662]: I1208 09:41:20.717276 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e4318c6-7d18-41a3-92aa-6cbc4c99b79e" path="/var/lib/kubelet/pods/2e4318c6-7d18-41a3-92aa-6cbc4c99b79e/volumes" Dec 08 09:41:21 crc kubenswrapper[4662]: I1208 09:41:21.902813 4662 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:21 crc kubenswrapper[4662]: I1208 09:41:21.997771 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-ssh-key\") pod \"c10c6537-3808-47c8-bf89-01e1bff9f51e\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " Dec 08 09:41:21 crc kubenswrapper[4662]: I1208 09:41:21.997971 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-inventory\") pod \"c10c6537-3808-47c8-bf89-01e1bff9f51e\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " Dec 08 09:41:21 crc kubenswrapper[4662]: I1208 09:41:21.998148 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gncsc\" (UniqueName: \"kubernetes.io/projected/c10c6537-3808-47c8-bf89-01e1bff9f51e-kube-api-access-gncsc\") pod \"c10c6537-3808-47c8-bf89-01e1bff9f51e\" (UID: \"c10c6537-3808-47c8-bf89-01e1bff9f51e\") " Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.004419 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c10c6537-3808-47c8-bf89-01e1bff9f51e-kube-api-access-gncsc" (OuterVolumeSpecName: "kube-api-access-gncsc") pod "c10c6537-3808-47c8-bf89-01e1bff9f51e" (UID: "c10c6537-3808-47c8-bf89-01e1bff9f51e"). InnerVolumeSpecName "kube-api-access-gncsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.021926 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "c10c6537-3808-47c8-bf89-01e1bff9f51e" (UID: "c10c6537-3808-47c8-bf89-01e1bff9f51e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.026701 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-inventory" (OuterVolumeSpecName: "inventory") pod "c10c6537-3808-47c8-bf89-01e1bff9f51e" (UID: "c10c6537-3808-47c8-bf89-01e1bff9f51e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.101006 4662 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-inventory\") on node \"crc\" DevicePath \"\"" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.101278 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gncsc\" (UniqueName: \"kubernetes.io/projected/c10c6537-3808-47c8-bf89-01e1bff9f51e-kube-api-access-gncsc\") on node \"crc\" DevicePath \"\"" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.101379 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/c10c6537-3808-47c8-bf89-01e1bff9f51e-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.532449 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" event={"ID":"c10c6537-3808-47c8-bf89-01e1bff9f51e","Type":"ContainerDied","Data":"4b304c09ca4d13e30ad8122cbcc10ad1301cc2af88639c91a6743eb46c62da7c"} Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.532483 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.532500 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b304c09ca4d13e30ad8122cbcc10ad1301cc2af88639c91a6743eb46c62da7c" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.650991 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt"] Dec 08 09:41:22 crc kubenswrapper[4662]: E1208 09:41:22.652303 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c10c6537-3808-47c8-bf89-01e1bff9f51e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.652329 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="c10c6537-3808-47c8-bf89-01e1bff9f51e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.652771 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="c10c6537-3808-47c8-bf89-01e1bff9f51e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.653893 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.666070 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.667133 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.667244 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-59jf7" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.667394 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.679359 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt"] Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.727831 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-46cwt\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.727900 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh7x6\" (UniqueName: \"kubernetes.io/projected/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-kube-api-access-kh7x6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-46cwt\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.728022 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-46cwt\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.830000 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-46cwt\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.830074 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh7x6\" (UniqueName: \"kubernetes.io/projected/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-kube-api-access-kh7x6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-46cwt\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.831232 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-46cwt\" (UID: 
\"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.835548 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-46cwt\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.836071 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-46cwt\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.856882 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh7x6\" (UniqueName: \"kubernetes.io/projected/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-kube-api-access-kh7x6\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-46cwt\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:22 crc kubenswrapper[4662]: I1208 09:41:22.985471 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" Dec 08 09:41:23 crc kubenswrapper[4662]: I1208 09:41:23.537421 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt"] Dec 08 09:41:23 crc kubenswrapper[4662]: I1208 09:41:23.545361 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" event={"ID":"377fa4a0-70b5-48ee-a28a-1dd0eb481e68","Type":"ContainerStarted","Data":"4ee6c1aa32275c94f3132ac876ece760605accdd720114b5c06fa88aa9e5010d"} Dec 08 09:41:24 crc kubenswrapper[4662]: I1208 09:41:24.554489 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" event={"ID":"377fa4a0-70b5-48ee-a28a-1dd0eb481e68","Type":"ContainerStarted","Data":"e302684c9d27724c8959cc9ba431f79309d2c13374560af0b79ac2cce5abc258"} Dec 08 09:41:24 crc kubenswrapper[4662]: I1208 09:41:24.578963 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" podStartSLOduration=2.126390037 podStartE2EDuration="2.578945487s" podCreationTimestamp="2025-12-08 09:41:22 +0000 UTC" firstStartedPulling="2025-12-08 09:41:23.523490632 +0000 UTC m=+1607.092518622" lastFinishedPulling="2025-12-08 09:41:23.976046082 +0000 UTC m=+1607.545074072" observedRunningTime="2025-12-08 09:41:24.574198339 +0000 UTC m=+1608.143226349" watchObservedRunningTime="2025-12-08 09:41:24.578945487 +0000 UTC m=+1608.147973477" Dec 08 09:41:27 crc kubenswrapper[4662]: I1208 09:41:27.698840 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:41:27 crc kubenswrapper[4662]: E1208 09:41:27.699938 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
Dec 08 09:41:27 crc kubenswrapper[4662]: I1208 09:41:27.698840 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e"
Dec 08 09:41:27 crc kubenswrapper[4662]: E1208 09:41:27.699938 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:41:38 crc kubenswrapper[4662]: I1208 09:41:38.697106 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e"
Dec 08 09:41:38 crc kubenswrapper[4662]: E1208 09:41:38.698812 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:41:47 crc kubenswrapper[4662]: I1208 09:41:47.808043 4662 scope.go:117] "RemoveContainer" containerID="14f077d92ddc09cef1de8eeec62a488b7bd527f6f31a97628b79914b8e09ecf1"
Dec 08 09:41:47 crc kubenswrapper[4662]: I1208 09:41:47.834958 4662 scope.go:117] "RemoveContainer" containerID="e8e6e80130922278f8705eed3367f4d58a1013041839492bb9cd1297585f2608"
Dec 08 09:41:47 crc kubenswrapper[4662]: I1208 09:41:47.929987 4662 scope.go:117] "RemoveContainer" containerID="79eac6f16770799e1eec94f762154403a6b4e608833cfbe2f9acd1b3a4bcb929"
Dec 08 09:41:47 crc kubenswrapper[4662]: I1208 09:41:47.961190 4662 scope.go:117] "RemoveContainer" containerID="05535c9b3048a3a46b6fc0c83b08b9c1f57d7ed7a0f2b336445a82a4181a7a41"
Dec 08 09:41:48 crc kubenswrapper[4662]: I1208 09:41:48.003259 4662 scope.go:117] "RemoveContainer" containerID="ff796602639723d2bb13027221f63eeab1fbcd492d5ab690e5cfc66db60b6f26"
Dec 08 09:41:48 crc kubenswrapper[4662]: I1208 09:41:48.050245 4662 scope.go:117] "RemoveContainer" containerID="0fdeca8043e91d3ea9633100c4c128efca4e9a1a499252a06dd77f8ad170b14e"
Dec 08 09:41:48 crc kubenswrapper[4662]: I1208 09:41:48.107806 4662 scope.go:117] "RemoveContainer" containerID="6fb3629dfc4441ef54b396654da65ab66742f145ab9d0e540fe40bb64c8de941"
Dec 08 09:41:48 crc kubenswrapper[4662]: I1208 09:41:48.126681 4662 scope.go:117] "RemoveContainer" containerID="ff2580b16c2638a6f12d2169f75eb8b9e05aa8365acccb82f8220248c9bfc2e9"
Dec 08 09:41:50 crc kubenswrapper[4662]: I1208 09:41:50.045914 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-4v9mq"]
Dec 08 09:41:50 crc kubenswrapper[4662]: I1208 09:41:50.054751 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-4v9mq"]
Dec 08 09:41:50 crc kubenswrapper[4662]: I1208 09:41:50.063622 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-r6jqp"]
Dec 08 09:41:50 crc kubenswrapper[4662]: I1208 09:41:50.070974 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-cgx89"]
Dec 08 09:41:50 crc kubenswrapper[4662]: I1208 09:41:50.078694 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-cgx89"]
Dec 08 09:41:50 crc kubenswrapper[4662]: I1208 09:41:50.086022 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-r6jqp"]
Dec 08 09:41:50 crc kubenswrapper[4662]: I1208 09:41:50.710617 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34dda116-fa4c-44db-bfd6-0077e8060c33" path="/var/lib/kubelet/pods/34dda116-fa4c-44db-bfd6-0077e8060c33/volumes"
Dec 08 09:41:50 crc kubenswrapper[4662]: I1208 09:41:50.711488 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46681e20-4717-4812-9c6e-b98bd8630c4c" path="/var/lib/kubelet/pods/46681e20-4717-4812-9c6e-b98bd8630c4c/volumes"
Dec 08 09:41:50 crc kubenswrapper[4662]: I1208 09:41:50.712237 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2234708-fc8c-42ec-91f3-8dfaffca750e" path="/var/lib/kubelet/pods/d2234708-fc8c-42ec-91f3-8dfaffca750e/volumes"
Dec 08 09:41:52 crc kubenswrapper[4662]: I1208 09:41:52.698301 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e"
Dec 08 09:41:52 crc kubenswrapper[4662]: E1208 09:41:52.698798 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:42:03 crc kubenswrapper[4662]: I1208 09:42:03.698031 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e"
Dec 08 09:42:03 crc kubenswrapper[4662]: E1208 09:42:03.698985 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:42:06 crc kubenswrapper[4662]: I1208 09:42:06.027379 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-xd547"]
Dec 08 09:42:06 crc kubenswrapper[4662]: I1208 09:42:06.034071 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-xd547"]
Dec 08 09:42:06 crc kubenswrapper[4662]: I1208 09:42:06.708035 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3cedcd2-d44c-4c45-acc9-384d45424740" path="/var/lib/kubelet/pods/a3cedcd2-d44c-4c45-acc9-384d45424740/volumes"
Dec 08 09:42:07 crc kubenswrapper[4662]: I1208 09:42:07.930904 4662 generic.go:334] "Generic (PLEG): container finished" podID="377fa4a0-70b5-48ee-a28a-1dd0eb481e68" containerID="e302684c9d27724c8959cc9ba431f79309d2c13374560af0b79ac2cce5abc258" exitCode=0
Dec 08 09:42:07 crc kubenswrapper[4662]: I1208 09:42:07.930978 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" event={"ID":"377fa4a0-70b5-48ee-a28a-1dd0eb481e68","Type":"ContainerDied","Data":"e302684c9d27724c8959cc9ba431f79309d2c13374560af0b79ac2cce5abc258"}
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.033850 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-z4wmv"]
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.045367 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-z4wmv"]
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.361397 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt"
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.504364 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-ssh-key\") pod \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") "
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.504463 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kh7x6\" (UniqueName: \"kubernetes.io/projected/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-kube-api-access-kh7x6\") pod \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") "
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.504693 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-inventory\") pod \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\" (UID: \"377fa4a0-70b5-48ee-a28a-1dd0eb481e68\") "
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.521131 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-kube-api-access-kh7x6" (OuterVolumeSpecName: "kube-api-access-kh7x6") pod "377fa4a0-70b5-48ee-a28a-1dd0eb481e68" (UID: "377fa4a0-70b5-48ee-a28a-1dd0eb481e68"). InnerVolumeSpecName "kube-api-access-kh7x6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.535800 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "377fa4a0-70b5-48ee-a28a-1dd0eb481e68" (UID: "377fa4a0-70b5-48ee-a28a-1dd0eb481e68"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.537781 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-inventory" (OuterVolumeSpecName: "inventory") pod "377fa4a0-70b5-48ee-a28a-1dd0eb481e68" (UID: "377fa4a0-70b5-48ee-a28a-1dd0eb481e68"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.606788 4662 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-inventory\") on node \"crc\" DevicePath \"\""
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.606831 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.606843 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kh7x6\" (UniqueName: \"kubernetes.io/projected/377fa4a0-70b5-48ee-a28a-1dd0eb481e68-kube-api-access-kh7x6\") on node \"crc\" DevicePath \"\""
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.949122 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt" event={"ID":"377fa4a0-70b5-48ee-a28a-1dd0eb481e68","Type":"ContainerDied","Data":"4ee6c1aa32275c94f3132ac876ece760605accdd720114b5c06fa88aa9e5010d"}
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.949440 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-46cwt"
Dec 08 09:42:09 crc kubenswrapper[4662]: I1208 09:42:09.949457 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ee6c1aa32275c94f3132ac876ece760605accdd720114b5c06fa88aa9e5010d"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.048889 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"]
Dec 08 09:42:10 crc kubenswrapper[4662]: E1208 09:42:10.049372 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="377fa4a0-70b5-48ee-a28a-1dd0eb481e68" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.049390 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="377fa4a0-70b5-48ee-a28a-1dd0eb481e68" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.049591 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="377fa4a0-70b5-48ee-a28a-1dd0eb481e68" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.050311 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.055057 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-59jf7"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.055386 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.055657 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.055920 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.058703 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"]
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.120842 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k96h7\" (UniqueName: \"kubernetes.io/projected/2f22774e-b0ec-4317-8085-9ff29aed798d-kube-api-access-k96h7\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.120975 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.121012 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.223326 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.223472 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k96h7\" (UniqueName: \"kubernetes.io/projected/2f22774e-b0ec-4317-8085-9ff29aed798d-kube-api-access-k96h7\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.223570 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.232571 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.236525 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.242781 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k96h7\" (UniqueName: \"kubernetes.io/projected/2f22774e-b0ec-4317-8085-9ff29aed798d-kube-api-access-k96h7\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:10 crc kubenswrapper[4662]: I1208 09:42:10.371968 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:11 crc kubenswrapper[4662]: I1208 09:42:10.715068 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc" path="/var/lib/kubelet/pods/74a3b1e6-10ba-4951-9aa3-ec2173c7e0dc/volumes"
Dec 08 09:42:11 crc kubenswrapper[4662]: I1208 09:42:10.890082 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"]
Dec 08 09:42:11 crc kubenswrapper[4662]: I1208 09:42:10.957959 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh" event={"ID":"2f22774e-b0ec-4317-8085-9ff29aed798d","Type":"ContainerStarted","Data":"d048116d1ee323c6e32826fbd58881356927ee7908411da6687e3932c9dfb66c"}
Dec 08 09:42:11 crc kubenswrapper[4662]: I1208 09:42:11.976065 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh" event={"ID":"2f22774e-b0ec-4317-8085-9ff29aed798d","Type":"ContainerStarted","Data":"cc395082a529b3b9de844d879258c5ba6f787f788accd027260306ae3b4db4aa"}
Dec 08 09:42:12 crc kubenswrapper[4662]: I1208 09:42:12.000326 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh" podStartSLOduration=1.590070514 podStartE2EDuration="2.000306239s" podCreationTimestamp="2025-12-08 09:42:10 +0000 UTC" firstStartedPulling="2025-12-08 09:42:10.908066719 +0000 UTC m=+1654.477094709" lastFinishedPulling="2025-12-08 09:42:11.318302444 +0000 UTC m=+1654.887330434" observedRunningTime="2025-12-08 09:42:11.996212669 +0000 UTC m=+1655.565240649" watchObservedRunningTime="2025-12-08 09:42:12.000306239 +0000 UTC m=+1655.569334239"
Dec 08 09:42:16 crc kubenswrapper[4662]: I1208 09:42:16.007275 4662 generic.go:334] "Generic (PLEG): container finished" podID="2f22774e-b0ec-4317-8085-9ff29aed798d" containerID="cc395082a529b3b9de844d879258c5ba6f787f788accd027260306ae3b4db4aa" exitCode=0
Dec 08 09:42:16 crc kubenswrapper[4662]: I1208 09:42:16.007365 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh" event={"ID":"2f22774e-b0ec-4317-8085-9ff29aed798d","Type":"ContainerDied","Data":"cc395082a529b3b9de844d879258c5ba6f787f788accd027260306ae3b4db4aa"}
Dec 08 09:42:17 crc kubenswrapper[4662]: I1208 09:42:17.405518 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:17 crc kubenswrapper[4662]: I1208 09:42:17.449417 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-inventory\") pod \"2f22774e-b0ec-4317-8085-9ff29aed798d\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") "
Dec 08 09:42:17 crc kubenswrapper[4662]: I1208 09:42:17.449487 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-ssh-key\") pod \"2f22774e-b0ec-4317-8085-9ff29aed798d\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") "
Dec 08 09:42:17 crc kubenswrapper[4662]: I1208 09:42:17.449582 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k96h7\" (UniqueName: \"kubernetes.io/projected/2f22774e-b0ec-4317-8085-9ff29aed798d-kube-api-access-k96h7\") pod \"2f22774e-b0ec-4317-8085-9ff29aed798d\" (UID: \"2f22774e-b0ec-4317-8085-9ff29aed798d\") "
Dec 08 09:42:17 crc kubenswrapper[4662]: I1208 09:42:17.455469 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f22774e-b0ec-4317-8085-9ff29aed798d-kube-api-access-k96h7" (OuterVolumeSpecName: "kube-api-access-k96h7") pod "2f22774e-b0ec-4317-8085-9ff29aed798d" (UID: "2f22774e-b0ec-4317-8085-9ff29aed798d"). InnerVolumeSpecName "kube-api-access-k96h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:42:17 crc kubenswrapper[4662]: I1208 09:42:17.477932 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2f22774e-b0ec-4317-8085-9ff29aed798d" (UID: "2f22774e-b0ec-4317-8085-9ff29aed798d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 08 09:42:17 crc kubenswrapper[4662]: I1208 09:42:17.483190 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-inventory" (OuterVolumeSpecName: "inventory") pod "2f22774e-b0ec-4317-8085-9ff29aed798d" (UID: "2f22774e-b0ec-4317-8085-9ff29aed798d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 08 09:42:17 crc kubenswrapper[4662]: I1208 09:42:17.551385 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 08 09:42:17 crc kubenswrapper[4662]: I1208 09:42:17.551589 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k96h7\" (UniqueName: \"kubernetes.io/projected/2f22774e-b0ec-4317-8085-9ff29aed798d-kube-api-access-k96h7\") on node \"crc\" DevicePath \"\""
Dec 08 09:42:17 crc kubenswrapper[4662]: I1208 09:42:17.551599 4662 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2f22774e-b0ec-4317-8085-9ff29aed798d-inventory\") on node \"crc\" DevicePath \"\""
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.032813 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh" event={"ID":"2f22774e-b0ec-4317-8085-9ff29aed798d","Type":"ContainerDied","Data":"d048116d1ee323c6e32826fbd58881356927ee7908411da6687e3932c9dfb66c"}
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.032928 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d048116d1ee323c6e32826fbd58881356927ee7908411da6687e3932c9dfb66c"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.032843 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.203334 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"]
Dec 08 09:42:18 crc kubenswrapper[4662]: E1208 09:42:18.203817 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f22774e-b0ec-4317-8085-9ff29aed798d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.203839 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f22774e-b0ec-4317-8085-9ff29aed798d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.204049 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f22774e-b0ec-4317-8085-9ff29aed798d" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.210532 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.214984 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.215064 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.216772 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-59jf7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.217171 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.220816 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"]
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.266007 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.266065 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-985bj\" (UniqueName: \"kubernetes.io/projected/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-kube-api-access-985bj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.266086 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.367761 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.367835 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-985bj\" (UniqueName: \"kubernetes.io/projected/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-kube-api-access-985bj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.367860 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.380381 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.404318 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-985bj\" (UniqueName: \"kubernetes.io/projected/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-kube-api-access-985bj\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.410301 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.525203 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:42:18 crc kubenswrapper[4662]: I1208 09:42:18.698377 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e"
Dec 08 09:42:18 crc kubenswrapper[4662]: E1208 09:42:18.698863 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:42:19 crc kubenswrapper[4662]: I1208 09:42:19.066223 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"]
Dec 08 09:42:20 crc kubenswrapper[4662]: I1208 09:42:20.050808 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7" event={"ID":"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd","Type":"ContainerStarted","Data":"435b6e815c7c811c53f44f040b5c5ac7153569feeec53e18d83c3dbf7a9600e4"}
Dec 08 09:42:20 crc kubenswrapper[4662]: I1208 09:42:20.051168 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7" event={"ID":"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd","Type":"ContainerStarted","Data":"7edeb0e250a7e691a773aa63ff270190b0ccaa4977911eeb7adc938f5a05748b"}
Dec 08 09:42:20 crc kubenswrapper[4662]: I1208 09:42:20.075032 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7" podStartSLOduration=1.572377395 podStartE2EDuration="2.075010637s" podCreationTimestamp="2025-12-08 09:42:18 +0000 UTC" firstStartedPulling="2025-12-08 09:42:19.079397154 +0000 UTC m=+1662.648425154" lastFinishedPulling="2025-12-08 09:42:19.582030406 +0000 UTC m=+1663.151058396" observedRunningTime="2025-12-08 09:42:20.07031001 +0000 UTC m=+1663.639338010" watchObservedRunningTime="2025-12-08 09:42:20.075010637 +0000 UTC m=+1663.644038627"
m=+1662.648425154" lastFinishedPulling="2025-12-08 09:42:19.582030406 +0000 UTC m=+1663.151058396" observedRunningTime="2025-12-08 09:42:20.07031001 +0000 UTC m=+1663.639338010" watchObservedRunningTime="2025-12-08 09:42:20.075010637 +0000 UTC m=+1663.644038627" Dec 08 09:42:31 crc kubenswrapper[4662]: I1208 09:42:31.698104 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:42:31 crc kubenswrapper[4662]: E1208 09:42:31.699033 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:42:44 crc kubenswrapper[4662]: I1208 09:42:44.697872 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:42:44 crc kubenswrapper[4662]: E1208 09:42:44.698735 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.076828 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-d829v"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.086813 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-0230-account-create-update-mqp6s"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.094146 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-c281-account-create-update-gjq7l"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.101168 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-vpsr8"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.107149 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bmvjs"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.113369 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-d829v"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.119676 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bmvjs"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.126123 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-c281-account-create-update-gjq7l"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.132448 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-f69b-account-create-update-q4s95"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.139816 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-vpsr8"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.146135 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-0230-account-create-update-mqp6s"] Dec 08 09:42:45 crc kubenswrapper[4662]: I1208 09:42:45.153173 4662 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-f69b-account-create-update-q4s95"] Dec 08 09:42:46 crc kubenswrapper[4662]: I1208 09:42:46.714777 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24937f1a-fdea-4383-9c39-885ee36af08c" path="/var/lib/kubelet/pods/24937f1a-fdea-4383-9c39-885ee36af08c/volumes" Dec 08 09:42:46 crc kubenswrapper[4662]: I1208 09:42:46.715924 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2db14df8-28f0-42fc-9891-361afd774445" path="/var/lib/kubelet/pods/2db14df8-28f0-42fc-9891-361afd774445/volumes" Dec 08 09:42:46 crc kubenswrapper[4662]: I1208 09:42:46.716861 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="433f7180-bb32-4bf5-b1d2-c75388f8011d" path="/var/lib/kubelet/pods/433f7180-bb32-4bf5-b1d2-c75388f8011d/volumes" Dec 08 09:42:46 crc kubenswrapper[4662]: I1208 09:42:46.718588 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8409e9a3-56e3-49a4-b270-ee8a2493fa75" path="/var/lib/kubelet/pods/8409e9a3-56e3-49a4-b270-ee8a2493fa75/volumes" Dec 08 09:42:46 crc kubenswrapper[4662]: I1208 09:42:46.720273 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9" path="/var/lib/kubelet/pods/c23dbbf3-eb06-4caf-a5e0-1d9e8aa226b9/volumes" Dec 08 09:42:46 crc kubenswrapper[4662]: I1208 09:42:46.721173 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd4aa2f3-f0db-4855-b972-e077877518c6" path="/var/lib/kubelet/pods/dd4aa2f3-f0db-4855-b972-e077877518c6/volumes" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.267610 4662 scope.go:117] "RemoveContainer" containerID="83ea04b4b478f25e9cd3eb5fb31f0eeeac687aa6085a5b6fd6865d2f8070578e" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.299844 4662 scope.go:117] "RemoveContainer" containerID="18d70e7d42db0da9422e825c48afe254259ace2db7fd040215b85113b978379b" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.371372 4662 scope.go:117] "RemoveContainer" containerID="964b26ff1d4b27498f7b3742cb7933e2e681e00a76ee69de3c28f2c03b70417c" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.411667 4662 scope.go:117] "RemoveContainer" containerID="f8d43661c07631e67a33948ce1fc3e9a45e883933da511c18d23c356d79da360" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.472447 4662 scope.go:117] "RemoveContainer" containerID="8b8b6233108b9055a6df0a863c65dfafcb8b258b17df895e5a73a780d1b4d15f" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.511493 4662 scope.go:117] "RemoveContainer" containerID="691ff21b5113c8d07d1dcea6a0edc95bddef5fdadfcb0747546fe9d0c0116cee" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.540595 4662 scope.go:117] "RemoveContainer" containerID="cf96b00d2bdd8da17c0a878be900be2ec1561eccfd406dc199e3bd02da6acbe2" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.562528 4662 scope.go:117] "RemoveContainer" containerID="7b91fb0d88656350ea391f276eda282c32b39f73188722fb29daddc51bb83697" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.593387 4662 scope.go:117] "RemoveContainer" containerID="d55f2ed09163d68f4e9c7180a9aa7b91dc839496296c1d3288141dde11f63ea1" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.635041 4662 scope.go:117] "RemoveContainer" containerID="e42c1551583bc477936dddba8be4f7a53c2e78749fc95eb0becf2910232976c0" Dec 08 09:42:48 crc kubenswrapper[4662]: I1208 09:42:48.682189 4662 scope.go:117] "RemoveContainer" 
containerID="5666cf5452ee905ac68ada7f3892549c2edeb8bb0887c3e2988493820d357040" Dec 08 09:42:58 crc kubenswrapper[4662]: I1208 09:42:58.698149 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:42:58 crc kubenswrapper[4662]: E1208 09:42:58.698820 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:43:10 crc kubenswrapper[4662]: I1208 09:43:10.698160 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:43:10 crc kubenswrapper[4662]: E1208 09:43:10.698941 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:43:18 crc kubenswrapper[4662]: I1208 09:43:18.547949 4662 generic.go:334] "Generic (PLEG): container finished" podID="d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd" containerID="435b6e815c7c811c53f44f040b5c5ac7153569feeec53e18d83c3dbf7a9600e4" exitCode=0 Dec 08 09:43:18 crc kubenswrapper[4662]: I1208 09:43:18.548189 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7" event={"ID":"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd","Type":"ContainerDied","Data":"435b6e815c7c811c53f44f040b5c5ac7153569feeec53e18d83c3dbf7a9600e4"} Dec 08 09:43:19 crc kubenswrapper[4662]: I1208 09:43:19.967314 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7" Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.111383 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-inventory\") pod \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.111721 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-985bj\" (UniqueName: \"kubernetes.io/projected/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-kube-api-access-985bj\") pod \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.112007 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-ssh-key\") pod \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") " Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.116909 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-kube-api-access-985bj" (OuterVolumeSpecName: "kube-api-access-985bj") pod "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd" (UID: "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd"). InnerVolumeSpecName "kube-api-access-985bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.135905 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd" (UID: "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.136304 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-inventory" (OuterVolumeSpecName: "inventory") pod "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd" (UID: "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.214441 4662 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-inventory\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.214473 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-985bj\" (UniqueName: \"kubernetes.io/projected/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-kube-api-access-985bj\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.214485 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.570667 4662 util.go:48] "No ready sandbox for pod can be found. 
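The "SyncLoop (PLEG)" lines above carry each pod lifecycle event as an event={...} payload, which in this capture happens to be valid JSON once extracted (an observation about these lines, not a stable kubelet contract). A small sketch pulling it out:

import json
import re

EVENT = re.compile(r'event=(\{.*?\})')  # the payloads here have no nested braces

def pleg_event(line):
    """Return the decoded event dict from a SyncLoop (PLEG) line, or None."""
    m = EVENT.search(line)
    return json.loads(m.group(1)) if m else None

# Example, using the ContainerDied line above:
line = ('kubelet.go:2453] "SyncLoop (PLEG): event for pod" '
        'pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7" '
        'event={"ID":"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd","Type":"ContainerDied",'
        '"Data":"435b6e815c7c811c53f44f040b5c5ac7153569feeec53e18d83c3dbf7a9600e4"}')
print(pleg_event(line)["Type"])  # ContainerDied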
Dec 08 09:43:19 crc kubenswrapper[4662]: I1208 09:43:19.967314 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.111383 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-inventory\") pod \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") "
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.111721 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-985bj\" (UniqueName: \"kubernetes.io/projected/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-kube-api-access-985bj\") pod \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") "
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.112007 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-ssh-key\") pod \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\" (UID: \"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd\") "
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.116909 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-kube-api-access-985bj" (OuterVolumeSpecName: "kube-api-access-985bj") pod "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd" (UID: "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd"). InnerVolumeSpecName "kube-api-access-985bj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.135905 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd" (UID: "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.136304 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-inventory" (OuterVolumeSpecName: "inventory") pod "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd" (UID: "d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.214441 4662 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-inventory\") on node \"crc\" DevicePath \"\""
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.214473 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-985bj\" (UniqueName: \"kubernetes.io/projected/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-kube-api-access-985bj\") on node \"crc\" DevicePath \"\""
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.214485 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd-ssh-key\") on node \"crc\" DevicePath \"\""
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.570667 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.570553 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7" event={"ID":"d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd","Type":"ContainerDied","Data":"7edeb0e250a7e691a773aa63ff270190b0ccaa4977911eeb7adc938f5a05748b"}
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.572875 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7edeb0e250a7e691a773aa63ff270190b0ccaa4977911eeb7adc938f5a05748b"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.657577 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nhcfd"]
Dec 08 09:43:20 crc kubenswrapper[4662]: E1208 09:43:20.658234 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.658262 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.658532 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.659599 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.662020 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.662223 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-59jf7"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.665253 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.670096 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.687647 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nhcfd"]
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.824975 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnlml\" (UniqueName: \"kubernetes.io/projected/9b8cd462-55c7-451b-a985-85606ca5374b-kube-api-access-wnlml\") pod \"ssh-known-hosts-edpm-deployment-nhcfd\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") " pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.825036 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nhcfd\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") " pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.825384 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nhcfd\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") " pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.927851 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nhcfd\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") " pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.927992 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnlml\" (UniqueName: \"kubernetes.io/projected/9b8cd462-55c7-451b-a985-85606ca5374b-kube-api-access-wnlml\") pod \"ssh-known-hosts-edpm-deployment-nhcfd\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") " pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.928024 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nhcfd\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") " pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.946088 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-nhcfd\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") " pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.953207 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-nhcfd\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") " pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.960829 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnlml\" (UniqueName: \"kubernetes.io/projected/9b8cd462-55c7-451b-a985-85606ca5374b-kube-api-access-wnlml\") pod \"ssh-known-hosts-edpm-deployment-nhcfd\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") " pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:20 crc kubenswrapper[4662]: I1208 09:43:20.985782 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:21 crc kubenswrapper[4662]: I1208 09:43:21.527317 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-nhcfd"]
Dec 08 09:43:21 crc kubenswrapper[4662]: W1208 09:43:21.550126 4662 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9b8cd462_55c7_451b_a985_85606ca5374b.slice/crio-70752dc551a7305b142ee883aae9d6ce4a50c18909d1bdc6f2856f049b98168b WatchSource:0}: Error finding container 70752dc551a7305b142ee883aae9d6ce4a50c18909d1bdc6f2856f049b98168b: Status 404 returned error can't find the container with id 70752dc551a7305b142ee883aae9d6ce4a50c18909d1bdc6f2856f049b98168b
Dec 08 09:43:21 crc kubenswrapper[4662]: I1208 09:43:21.580855 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd" event={"ID":"9b8cd462-55c7-451b-a985-85606ca5374b","Type":"ContainerStarted","Data":"70752dc551a7305b142ee883aae9d6ce4a50c18909d1bdc6f2856f049b98168b"}
Dec 08 09:43:22 crc kubenswrapper[4662]: I1208 09:43:22.589619 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd" event={"ID":"9b8cd462-55c7-451b-a985-85606ca5374b","Type":"ContainerStarted","Data":"cea60742433b821c57efe98b79c46610451310c894bec7001e15eb9cef2b6474"}
Dec 08 09:43:22 crc kubenswrapper[4662]: I1208 09:43:22.614300 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd" podStartSLOduration=2.080664184 podStartE2EDuration="2.614282762s" podCreationTimestamp="2025-12-08 09:43:20 +0000 UTC" firstStartedPulling="2025-12-08 09:43:21.553904511 +0000 UTC m=+1725.122932501" lastFinishedPulling="2025-12-08 09:43:22.087523089 +0000 UTC m=+1725.656551079" observedRunningTime="2025-12-08 09:43:22.60753483 +0000 UTC m=+1726.176562820" watchObservedRunningTime="2025-12-08 09:43:22.614282762 +0000 UTC m=+1726.183310752"
Dec 08 09:43:25 crc kubenswrapper[4662]: I1208 09:43:25.698311 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e"
Dec 08 09:43:25 crc kubenswrapper[4662]: E1208 09:43:25.699517 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:43:29 crc kubenswrapper[4662]: I1208 09:43:29.043318 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-q7bbm"]
Dec 08 09:43:29 crc kubenswrapper[4662]: I1208 09:43:29.050755 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-q7bbm"]
Dec 08 09:43:29 crc kubenswrapper[4662]: I1208 09:43:29.676852 4662 generic.go:334] "Generic (PLEG): container finished" podID="9b8cd462-55c7-451b-a985-85606ca5374b" containerID="cea60742433b821c57efe98b79c46610451310c894bec7001e15eb9cef2b6474" exitCode=0
Dec 08 09:43:29 crc kubenswrapper[4662]: I1208 09:43:29.676897 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd" event={"ID":"9b8cd462-55c7-451b-a985-85606ca5374b","Type":"ContainerDied","Data":"cea60742433b821c57efe98b79c46610451310c894bec7001e15eb9cef2b6474"}
Dec 08 09:43:30 crc kubenswrapper[4662]: I1208 09:43:30.715356 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e93e89c3-cd7d-4205-9741-ac087d1d7bd6" path="/var/lib/kubelet/pods/e93e89c3-cd7d-4205-9741-ac087d1d7bd6/volumes"
Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.110279 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd"
Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.255698 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-ssh-key-openstack-edpm-ipam\") pod \"9b8cd462-55c7-451b-a985-85606ca5374b\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") "
Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.256086 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-inventory-0\") pod \"9b8cd462-55c7-451b-a985-85606ca5374b\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") "
Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.256221 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wnlml\" (UniqueName: \"kubernetes.io/projected/9b8cd462-55c7-451b-a985-85606ca5374b-kube-api-access-wnlml\") pod \"9b8cd462-55c7-451b-a985-85606ca5374b\" (UID: \"9b8cd462-55c7-451b-a985-85606ca5374b\") "
Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.262684 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b8cd462-55c7-451b-a985-85606ca5374b-kube-api-access-wnlml" (OuterVolumeSpecName: "kube-api-access-wnlml") pod "9b8cd462-55c7-451b-a985-85606ca5374b" (UID: "9b8cd462-55c7-451b-a985-85606ca5374b"). InnerVolumeSpecName "kube-api-access-wnlml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.290564 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "9b8cd462-55c7-451b-a985-85606ca5374b" (UID: "9b8cd462-55c7-451b-a985-85606ca5374b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.296495 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "9b8cd462-55c7-451b-a985-85606ca5374b" (UID: "9b8cd462-55c7-451b-a985-85606ca5374b"). InnerVolumeSpecName "inventory-0".
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.358249 4662 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-inventory-0\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.358286 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wnlml\" (UniqueName: \"kubernetes.io/projected/9b8cd462-55c7-451b-a985-85606ca5374b-kube-api-access-wnlml\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.358301 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/9b8cd462-55c7-451b-a985-85606ca5374b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.699053 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd" event={"ID":"9b8cd462-55c7-451b-a985-85606ca5374b","Type":"ContainerDied","Data":"70752dc551a7305b142ee883aae9d6ce4a50c18909d1bdc6f2856f049b98168b"} Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.699086 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70752dc551a7305b142ee883aae9d6ce4a50c18909d1bdc6f2856f049b98168b" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.699109 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-nhcfd" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.796181 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm"] Dec 08 09:43:31 crc kubenswrapper[4662]: E1208 09:43:31.796548 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b8cd462-55c7-451b-a985-85606ca5374b" containerName="ssh-known-hosts-edpm-deployment" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.796561 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b8cd462-55c7-451b-a985-85606ca5374b" containerName="ssh-known-hosts-edpm-deployment" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.796795 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b8cd462-55c7-451b-a985-85606ca5374b" containerName="ssh-known-hosts-edpm-deployment" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.797354 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.801250 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.801396 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.801420 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-59jf7" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.801708 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.822057 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm"] Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.968204 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzvmm\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.968280 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzvmm\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:31 crc kubenswrapper[4662]: I1208 09:43:31.968419 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc67k\" (UniqueName: \"kubernetes.io/projected/0aa15dee-b462-4ee2-89cc-00a38071db66-kube-api-access-hc67k\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzvmm\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:32 crc kubenswrapper[4662]: I1208 09:43:32.069591 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzvmm\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:32 crc kubenswrapper[4662]: I1208 09:43:32.069867 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzvmm\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:32 crc kubenswrapper[4662]: I1208 09:43:32.070029 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc67k\" (UniqueName: \"kubernetes.io/projected/0aa15dee-b462-4ee2-89cc-00a38071db66-kube-api-access-hc67k\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzvmm\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:32 crc kubenswrapper[4662]: I1208 09:43:32.076153 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzvmm\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:32 crc kubenswrapper[4662]: I1208 09:43:32.079872 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzvmm\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:32 crc kubenswrapper[4662]: I1208 09:43:32.087412 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc67k\" (UniqueName: \"kubernetes.io/projected/0aa15dee-b462-4ee2-89cc-00a38071db66-kube-api-access-hc67k\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-pzvmm\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:32 crc kubenswrapper[4662]: I1208 09:43:32.126326 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:32 crc kubenswrapper[4662]: I1208 09:43:32.757730 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm"] Dec 08 09:43:33 crc kubenswrapper[4662]: I1208 09:43:33.715122 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" event={"ID":"0aa15dee-b462-4ee2-89cc-00a38071db66","Type":"ContainerStarted","Data":"4457b8b338b311cda780a8a90995aedcf6c3cc732924b6f42b0a21eaa7a91fbd"} Dec 08 09:43:33 crc kubenswrapper[4662]: I1208 09:43:33.715457 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" event={"ID":"0aa15dee-b462-4ee2-89cc-00a38071db66","Type":"ContainerStarted","Data":"778fb5bce10bf94215e4bd3f76176c59f2c26b364e6c0f192fbc57fbbc4ad5b9"} Dec 08 09:43:33 crc kubenswrapper[4662]: I1208 09:43:33.737231 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" podStartSLOduration=2.3413378959999998 podStartE2EDuration="2.737206695s" podCreationTimestamp="2025-12-08 09:43:31 +0000 UTC" firstStartedPulling="2025-12-08 09:43:32.762442055 +0000 UTC m=+1736.331470045" lastFinishedPulling="2025-12-08 09:43:33.158310834 +0000 UTC m=+1736.727338844" observedRunningTime="2025-12-08 09:43:33.727874373 +0000 UTC m=+1737.296902363" watchObservedRunningTime="2025-12-08 09:43:33.737206695 +0000 UTC m=+1737.306234695" Dec 08 09:43:38 crc kubenswrapper[4662]: I1208 09:43:38.697418 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:43:38 crc kubenswrapper[4662]: E1208 09:43:38.699157 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:43:42 crc kubenswrapper[4662]: I1208 09:43:42.792187 4662 generic.go:334] "Generic (PLEG): container finished" podID="0aa15dee-b462-4ee2-89cc-00a38071db66" containerID="4457b8b338b311cda780a8a90995aedcf6c3cc732924b6f42b0a21eaa7a91fbd" exitCode=0 Dec 08 09:43:42 crc kubenswrapper[4662]: I1208 09:43:42.792268 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" event={"ID":"0aa15dee-b462-4ee2-89cc-00a38071db66","Type":"ContainerDied","Data":"4457b8b338b311cda780a8a90995aedcf6c3cc732924b6f42b0a21eaa7a91fbd"} Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.301140 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.386997 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc67k\" (UniqueName: \"kubernetes.io/projected/0aa15dee-b462-4ee2-89cc-00a38071db66-kube-api-access-hc67k\") pod \"0aa15dee-b462-4ee2-89cc-00a38071db66\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.387083 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-inventory\") pod \"0aa15dee-b462-4ee2-89cc-00a38071db66\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.387179 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-ssh-key\") pod \"0aa15dee-b462-4ee2-89cc-00a38071db66\" (UID: \"0aa15dee-b462-4ee2-89cc-00a38071db66\") " Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.405425 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0aa15dee-b462-4ee2-89cc-00a38071db66-kube-api-access-hc67k" (OuterVolumeSpecName: "kube-api-access-hc67k") pod "0aa15dee-b462-4ee2-89cc-00a38071db66" (UID: "0aa15dee-b462-4ee2-89cc-00a38071db66"). InnerVolumeSpecName "kube-api-access-hc67k". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.419401 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-inventory" (OuterVolumeSpecName: "inventory") pod "0aa15dee-b462-4ee2-89cc-00a38071db66" (UID: "0aa15dee-b462-4ee2-89cc-00a38071db66"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.420182 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0aa15dee-b462-4ee2-89cc-00a38071db66" (UID: "0aa15dee-b462-4ee2-89cc-00a38071db66"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.489060 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc67k\" (UniqueName: \"kubernetes.io/projected/0aa15dee-b462-4ee2-89cc-00a38071db66-kube-api-access-hc67k\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.489282 4662 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-inventory\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.489342 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0aa15dee-b462-4ee2-89cc-00a38071db66-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.813647 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" event={"ID":"0aa15dee-b462-4ee2-89cc-00a38071db66","Type":"ContainerDied","Data":"778fb5bce10bf94215e4bd3f76176c59f2c26b364e6c0f192fbc57fbbc4ad5b9"} Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.813697 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="778fb5bce10bf94215e4bd3f76176c59f2c26b364e6c0f192fbc57fbbc4ad5b9" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.813718 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-pzvmm" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.897780 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc"] Dec 08 09:43:44 crc kubenswrapper[4662]: E1208 09:43:44.898219 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0aa15dee-b462-4ee2-89cc-00a38071db66" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.898238 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="0aa15dee-b462-4ee2-89cc-00a38071db66" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.898399 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="0aa15dee-b462-4ee2-89cc-00a38071db66" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.899082 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.903710 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.904103 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.907999 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.914344 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc"] Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.918180 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-59jf7" Dec 08 09:43:44 crc kubenswrapper[4662]: I1208 09:43:44.999667 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:44.999971 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz7v9\" (UniqueName: \"kubernetes.io/projected/3deffeb8-98a7-4f85-aa26-705bb171d886-kube-api-access-dz7v9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:45.000169 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:45.101312 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:45.101432 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz7v9\" (UniqueName: \"kubernetes.io/projected/3deffeb8-98a7-4f85-aa26-705bb171d886-kube-api-access-dz7v9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:45.101500 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc\" (UID: 
\"3deffeb8-98a7-4f85-aa26-705bb171d886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:45.104677 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:45.109630 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:45.144784 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz7v9\" (UniqueName: \"kubernetes.io/projected/3deffeb8-98a7-4f85-aa26-705bb171d886-kube-api-access-dz7v9\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:45.241129 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:45.741779 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc"] Dec 08 09:43:45 crc kubenswrapper[4662]: I1208 09:43:45.823267 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" event={"ID":"3deffeb8-98a7-4f85-aa26-705bb171d886","Type":"ContainerStarted","Data":"5d11e3851f4e9273ac8a185c97ec1cc8bb8c814b987b733545d795ae5cb67fde"} Dec 08 09:43:46 crc kubenswrapper[4662]: I1208 09:43:46.835649 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" event={"ID":"3deffeb8-98a7-4f85-aa26-705bb171d886","Type":"ContainerStarted","Data":"8704e209a89f3f9badc52d7fcf205a38c0c787c02e0ef18b97719c4e53967006"} Dec 08 09:43:46 crc kubenswrapper[4662]: I1208 09:43:46.874499 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" podStartSLOduration=2.432019179 podStartE2EDuration="2.874475436s" podCreationTimestamp="2025-12-08 09:43:44 +0000 UTC" firstStartedPulling="2025-12-08 09:43:45.751817484 +0000 UTC m=+1749.320845474" lastFinishedPulling="2025-12-08 09:43:46.194273731 +0000 UTC m=+1749.763301731" observedRunningTime="2025-12-08 09:43:46.868214957 +0000 UTC m=+1750.437242947" watchObservedRunningTime="2025-12-08 09:43:46.874475436 +0000 UTC m=+1750.443503426" Dec 08 09:43:48 crc kubenswrapper[4662]: I1208 09:43:48.962045 4662 scope.go:117] "RemoveContainer" containerID="996f2cc86adc17feaa6089916c9026b603933b3ab7cdcbb78ec4872fce420ed8" Dec 08 09:43:49 crc kubenswrapper[4662]: I1208 09:43:49.698165 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:43:49 crc kubenswrapper[4662]: E1208 09:43:49.698471 4662 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:43:55 crc kubenswrapper[4662]: I1208 09:43:55.045129 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-p72s9"] Dec 08 09:43:55 crc kubenswrapper[4662]: I1208 09:43:55.055482 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-p72s9"] Dec 08 09:43:56 crc kubenswrapper[4662]: I1208 09:43:56.029670 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cglm9"] Dec 08 09:43:56 crc kubenswrapper[4662]: I1208 09:43:56.036783 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-cglm9"] Dec 08 09:43:56 crc kubenswrapper[4662]: I1208 09:43:56.722607 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2943a258-ba1d-4a9d-a6c9-e1817b52d458" path="/var/lib/kubelet/pods/2943a258-ba1d-4a9d-a6c9-e1817b52d458/volumes" Dec 08 09:43:56 crc kubenswrapper[4662]: I1208 09:43:56.727045 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="648152ca-1c66-4843-ad6d-20450aa26819" path="/var/lib/kubelet/pods/648152ca-1c66-4843-ad6d-20450aa26819/volumes" Dec 08 09:43:56 crc kubenswrapper[4662]: I1208 09:43:56.942867 4662 generic.go:334] "Generic (PLEG): container finished" podID="3deffeb8-98a7-4f85-aa26-705bb171d886" containerID="8704e209a89f3f9badc52d7fcf205a38c0c787c02e0ef18b97719c4e53967006" exitCode=0 Dec 08 09:43:56 crc kubenswrapper[4662]: I1208 09:43:56.942910 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" event={"ID":"3deffeb8-98a7-4f85-aa26-705bb171d886","Type":"ContainerDied","Data":"8704e209a89f3f9badc52d7fcf205a38c0c787c02e0ef18b97719c4e53967006"} Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.362213 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.492128 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-ssh-key\") pod \"3deffeb8-98a7-4f85-aa26-705bb171d886\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.492200 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-inventory\") pod \"3deffeb8-98a7-4f85-aa26-705bb171d886\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.492323 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dz7v9\" (UniqueName: \"kubernetes.io/projected/3deffeb8-98a7-4f85-aa26-705bb171d886-kube-api-access-dz7v9\") pod \"3deffeb8-98a7-4f85-aa26-705bb171d886\" (UID: \"3deffeb8-98a7-4f85-aa26-705bb171d886\") " Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.514840 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3deffeb8-98a7-4f85-aa26-705bb171d886-kube-api-access-dz7v9" (OuterVolumeSpecName: "kube-api-access-dz7v9") pod "3deffeb8-98a7-4f85-aa26-705bb171d886" (UID: "3deffeb8-98a7-4f85-aa26-705bb171d886"). InnerVolumeSpecName "kube-api-access-dz7v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.519600 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3deffeb8-98a7-4f85-aa26-705bb171d886" (UID: "3deffeb8-98a7-4f85-aa26-705bb171d886"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.529419 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-inventory" (OuterVolumeSpecName: "inventory") pod "3deffeb8-98a7-4f85-aa26-705bb171d886" (UID: "3deffeb8-98a7-4f85-aa26-705bb171d886"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.594115 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dz7v9\" (UniqueName: \"kubernetes.io/projected/3deffeb8-98a7-4f85-aa26-705bb171d886-kube-api-access-dz7v9\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.594151 4662 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-ssh-key\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.594160 4662 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3deffeb8-98a7-4f85-aa26-705bb171d886-inventory\") on node \"crc\" DevicePath \"\"" Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.960428 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" event={"ID":"3deffeb8-98a7-4f85-aa26-705bb171d886","Type":"ContainerDied","Data":"5d11e3851f4e9273ac8a185c97ec1cc8bb8c814b987b733545d795ae5cb67fde"} Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.960467 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d11e3851f4e9273ac8a185c97ec1cc8bb8c814b987b733545d795ae5cb67fde" Dec 08 09:43:58 crc kubenswrapper[4662]: I1208 09:43:58.960473 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc" Dec 08 09:44:02 crc kubenswrapper[4662]: I1208 09:44:02.894811 4662 patch_prober.go:28] interesting pod/route-controller-manager-77f4b48dbc-w5mn2 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 09:44:02 crc kubenswrapper[4662]: I1208 09:44:02.895474 4662 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" podUID="e878ebd7-10c2-4ddc-9c60-911f3adeccfe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 09:44:02 crc kubenswrapper[4662]: I1208 09:44:02.894902 4662 patch_prober.go:28] interesting pod/route-controller-manager-77f4b48dbc-w5mn2 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 09:44:02 crc kubenswrapper[4662]: I1208 09:44:02.895592 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-77f4b48dbc-w5mn2" podUID="e878ebd7-10c2-4ddc-9c60-911f3adeccfe" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.68:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 09:44:03 crc kubenswrapper[4662]: I1208 09:44:03.193889 4662 prober.go:107] "Probe failed" probeType="Liveness" 
pod="hostpath-provisioner/csi-hostpathplugin-7t9mp" podUID="63c64b29-5b57-481f-b4b2-c92498738c8a" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 09:44:04 crc kubenswrapper[4662]: I1208 09:44:04.697399 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:44:04 crc kubenswrapper[4662]: E1208 09:44:04.697955 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:44:19 crc kubenswrapper[4662]: I1208 09:44:19.697974 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:44:19 crc kubenswrapper[4662]: E1208 09:44:19.698882 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:44:32 crc kubenswrapper[4662]: I1208 09:44:32.697322 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:44:33 crc kubenswrapper[4662]: I1208 09:44:33.331313 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"1e6c4becf03c0eb64cc76451dcb0b9d5535374c59a72e6a7c92ee83afc741d04"} Dec 08 09:44:38 crc kubenswrapper[4662]: I1208 09:44:38.060072 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-fp672"] Dec 08 09:44:38 crc kubenswrapper[4662]: I1208 09:44:38.074687 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-fp672"] Dec 08 09:44:38 crc kubenswrapper[4662]: I1208 09:44:38.710174 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ceb80a9-e524-4d98-87dd-ccd850c4b1ce" path="/var/lib/kubelet/pods/3ceb80a9-e524-4d98-87dd-ccd850c4b1ce/volumes" Dec 08 09:44:49 crc kubenswrapper[4662]: I1208 09:44:49.037065 4662 scope.go:117] "RemoveContainer" containerID="21a9d9ef05bbc53b027c9da0065ab413149dce48eb928a0f0ac748a97fd15e4c" Dec 08 09:44:49 crc kubenswrapper[4662]: I1208 09:44:49.082041 4662 scope.go:117] "RemoveContainer" containerID="4468eb7e87a0280c824d0909a775014426c3b2d69d19645dfbba2d0b9c45c859" Dec 08 09:44:49 crc kubenswrapper[4662]: I1208 09:44:49.137954 4662 scope.go:117] "RemoveContainer" containerID="822152dc55f3b3036e5ec659c66c2da5b01b133b23148256f51877c8e343d822" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.170588 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx"] Dec 08 09:45:00 crc kubenswrapper[4662]: E1208 09:45:00.171709 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3deffeb8-98a7-4f85-aa26-705bb171d886" 
containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.171728 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="3deffeb8-98a7-4f85-aa26-705bb171d886" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.171962 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="3deffeb8-98a7-4f85-aa26-705bb171d886" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.172978 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.177302 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.177501 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.189224 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx"] Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.253946 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b81c7dc3-04b3-412b-94da-5c2792878fb2-config-volume\") pod \"collect-profiles-29419785-m4lwx\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.254289 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b81c7dc3-04b3-412b-94da-5c2792878fb2-secret-volume\") pod \"collect-profiles-29419785-m4lwx\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.254443 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64tmf\" (UniqueName: \"kubernetes.io/projected/b81c7dc3-04b3-412b-94da-5c2792878fb2-kube-api-access-64tmf\") pod \"collect-profiles-29419785-m4lwx\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.357059 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b81c7dc3-04b3-412b-94da-5c2792878fb2-config-volume\") pod \"collect-profiles-29419785-m4lwx\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.357237 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b81c7dc3-04b3-412b-94da-5c2792878fb2-secret-volume\") pod \"collect-profiles-29419785-m4lwx\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 
09:45:00.357285 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b81c7dc3-04b3-412b-94da-5c2792878fb2-config-volume\") pod \"collect-profiles-29419785-m4lwx\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.357612 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64tmf\" (UniqueName: \"kubernetes.io/projected/b81c7dc3-04b3-412b-94da-5c2792878fb2-kube-api-access-64tmf\") pod \"collect-profiles-29419785-m4lwx\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.363732 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b81c7dc3-04b3-412b-94da-5c2792878fb2-secret-volume\") pod \"collect-profiles-29419785-m4lwx\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.374526 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64tmf\" (UniqueName: \"kubernetes.io/projected/b81c7dc3-04b3-412b-94da-5c2792878fb2-kube-api-access-64tmf\") pod \"collect-profiles-29419785-m4lwx\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.513916 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:00 crc kubenswrapper[4662]: I1208 09:45:00.962217 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx"] Dec 08 09:45:01 crc kubenswrapper[4662]: I1208 09:45:01.576043 4662 generic.go:334] "Generic (PLEG): container finished" podID="b81c7dc3-04b3-412b-94da-5c2792878fb2" containerID="67e960ea51deafc7e0523500031c2f735495be757adbf50bcb9365fbc3c2c8e9" exitCode=0 Dec 08 09:45:01 crc kubenswrapper[4662]: I1208 09:45:01.576114 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" event={"ID":"b81c7dc3-04b3-412b-94da-5c2792878fb2","Type":"ContainerDied","Data":"67e960ea51deafc7e0523500031c2f735495be757adbf50bcb9365fbc3c2c8e9"} Dec 08 09:45:01 crc kubenswrapper[4662]: I1208 09:45:01.577300 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" event={"ID":"b81c7dc3-04b3-412b-94da-5c2792878fb2","Type":"ContainerStarted","Data":"eb62b30875fa2eabd55c0efe839977084511450d167c40ac723a523451216e3e"} Dec 08 09:45:02 crc kubenswrapper[4662]: I1208 09:45:02.944024 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.104584 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b81c7dc3-04b3-412b-94da-5c2792878fb2-config-volume\") pod \"b81c7dc3-04b3-412b-94da-5c2792878fb2\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.104697 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64tmf\" (UniqueName: \"kubernetes.io/projected/b81c7dc3-04b3-412b-94da-5c2792878fb2-kube-api-access-64tmf\") pod \"b81c7dc3-04b3-412b-94da-5c2792878fb2\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.104827 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b81c7dc3-04b3-412b-94da-5c2792878fb2-secret-volume\") pod \"b81c7dc3-04b3-412b-94da-5c2792878fb2\" (UID: \"b81c7dc3-04b3-412b-94da-5c2792878fb2\") " Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.106293 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b81c7dc3-04b3-412b-94da-5c2792878fb2-config-volume" (OuterVolumeSpecName: "config-volume") pod "b81c7dc3-04b3-412b-94da-5c2792878fb2" (UID: "b81c7dc3-04b3-412b-94da-5c2792878fb2"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.110904 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b81c7dc3-04b3-412b-94da-5c2792878fb2-kube-api-access-64tmf" (OuterVolumeSpecName: "kube-api-access-64tmf") pod "b81c7dc3-04b3-412b-94da-5c2792878fb2" (UID: "b81c7dc3-04b3-412b-94da-5c2792878fb2"). InnerVolumeSpecName "kube-api-access-64tmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.111019 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b81c7dc3-04b3-412b-94da-5c2792878fb2-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b81c7dc3-04b3-412b-94da-5c2792878fb2" (UID: "b81c7dc3-04b3-412b-94da-5c2792878fb2"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.207101 4662 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b81c7dc3-04b3-412b-94da-5c2792878fb2-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.207132 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64tmf\" (UniqueName: \"kubernetes.io/projected/b81c7dc3-04b3-412b-94da-5c2792878fb2-kube-api-access-64tmf\") on node \"crc\" DevicePath \"\"" Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.207144 4662 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b81c7dc3-04b3-412b-94da-5c2792878fb2-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.594153 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" event={"ID":"b81c7dc3-04b3-412b-94da-5c2792878fb2","Type":"ContainerDied","Data":"eb62b30875fa2eabd55c0efe839977084511450d167c40ac723a523451216e3e"} Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.594189 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb62b30875fa2eabd55c0efe839977084511450d167c40ac723a523451216e3e" Dec 08 09:45:03 crc kubenswrapper[4662]: I1208 09:45:03.594223 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419785-m4lwx" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.408796 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fkxfd"] Dec 08 09:45:43 crc kubenswrapper[4662]: E1208 09:45:43.410234 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b81c7dc3-04b3-412b-94da-5c2792878fb2" containerName="collect-profiles" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.410258 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="b81c7dc3-04b3-412b-94da-5c2792878fb2" containerName="collect-profiles" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.410441 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="b81c7dc3-04b3-412b-94da-5c2792878fb2" containerName="collect-profiles" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.411730 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.417795 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkxfd"] Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.450213 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-utilities\") pod \"certified-operators-fkxfd\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.450268 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-catalog-content\") pod \"certified-operators-fkxfd\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.450385 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8lvq\" (UniqueName: \"kubernetes.io/projected/1a5cf49d-e068-463a-97da-f3bc35aab676-kube-api-access-n8lvq\") pod \"certified-operators-fkxfd\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.551854 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-utilities\") pod \"certified-operators-fkxfd\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.552128 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-catalog-content\") pod \"certified-operators-fkxfd\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.552199 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8lvq\" (UniqueName: \"kubernetes.io/projected/1a5cf49d-e068-463a-97da-f3bc35aab676-kube-api-access-n8lvq\") pod \"certified-operators-fkxfd\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.552686 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-utilities\") pod \"certified-operators-fkxfd\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.552686 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-catalog-content\") pod \"certified-operators-fkxfd\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.578830 4662 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-n8lvq\" (UniqueName: \"kubernetes.io/projected/1a5cf49d-e068-463a-97da-f3bc35aab676-kube-api-access-n8lvq\") pod \"certified-operators-fkxfd\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:43 crc kubenswrapper[4662]: I1208 09:45:43.732299 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:44 crc kubenswrapper[4662]: I1208 09:45:44.242727 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fkxfd"] Dec 08 09:45:44 crc kubenswrapper[4662]: I1208 09:45:44.987226 4662 generic.go:334] "Generic (PLEG): container finished" podID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerID="a93e412059a85f0b22d0ca3a642b45a0beb09c14a9e2cbcb844bfa79ddf2f7d0" exitCode=0 Dec 08 09:45:44 crc kubenswrapper[4662]: I1208 09:45:44.987324 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkxfd" event={"ID":"1a5cf49d-e068-463a-97da-f3bc35aab676","Type":"ContainerDied","Data":"a93e412059a85f0b22d0ca3a642b45a0beb09c14a9e2cbcb844bfa79ddf2f7d0"} Dec 08 09:45:44 crc kubenswrapper[4662]: I1208 09:45:44.988528 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkxfd" event={"ID":"1a5cf49d-e068-463a-97da-f3bc35aab676","Type":"ContainerStarted","Data":"7e54d6db85f1881a09d496440a69e2cd3ae1528dd0cd018f216786ce458499d3"} Dec 08 09:45:44 crc kubenswrapper[4662]: I1208 09:45:44.992166 4662 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.008046 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cfhtk"] Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.020340 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.023931 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cfhtk"] Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.086423 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-utilities\") pod \"community-operators-cfhtk\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.086509 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-catalog-content\") pod \"community-operators-cfhtk\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.086537 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjzrb\" (UniqueName: \"kubernetes.io/projected/879e4758-a24d-434d-83de-2f3c354d5d0d-kube-api-access-qjzrb\") pod \"community-operators-cfhtk\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.188120 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-catalog-content\") pod \"community-operators-cfhtk\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.188188 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjzrb\" (UniqueName: \"kubernetes.io/projected/879e4758-a24d-434d-83de-2f3c354d5d0d-kube-api-access-qjzrb\") pod \"community-operators-cfhtk\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.188407 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-utilities\") pod \"community-operators-cfhtk\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.189206 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-utilities\") pod \"community-operators-cfhtk\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.189514 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-catalog-content\") pod \"community-operators-cfhtk\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.213626 4662 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qjzrb\" (UniqueName: \"kubernetes.io/projected/879e4758-a24d-434d-83de-2f3c354d5d0d-kube-api-access-qjzrb\") pod \"community-operators-cfhtk\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.352771 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:45 crc kubenswrapper[4662]: I1208 09:45:45.894584 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cfhtk"] Dec 08 09:45:46 crc kubenswrapper[4662]: I1208 09:45:45.998126 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkxfd" event={"ID":"1a5cf49d-e068-463a-97da-f3bc35aab676","Type":"ContainerStarted","Data":"8549173fa973dd51e80e69b4466cac9694ea98390c4b31911d38a11a76c14426"} Dec 08 09:45:46 crc kubenswrapper[4662]: I1208 09:45:46.000036 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfhtk" event={"ID":"879e4758-a24d-434d-83de-2f3c354d5d0d","Type":"ContainerStarted","Data":"d3a112567f8ba719c575c7b90bbe19b772e0d8df9dab5c3207a003ceed291a09"} Dec 08 09:45:47 crc kubenswrapper[4662]: I1208 09:45:47.011472 4662 generic.go:334] "Generic (PLEG): container finished" podID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerID="8549173fa973dd51e80e69b4466cac9694ea98390c4b31911d38a11a76c14426" exitCode=0 Dec 08 09:45:47 crc kubenswrapper[4662]: I1208 09:45:47.011534 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkxfd" event={"ID":"1a5cf49d-e068-463a-97da-f3bc35aab676","Type":"ContainerDied","Data":"8549173fa973dd51e80e69b4466cac9694ea98390c4b31911d38a11a76c14426"} Dec 08 09:45:47 crc kubenswrapper[4662]: I1208 09:45:47.017274 4662 generic.go:334] "Generic (PLEG): container finished" podID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerID="ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95" exitCode=0 Dec 08 09:45:47 crc kubenswrapper[4662]: I1208 09:45:47.017321 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfhtk" event={"ID":"879e4758-a24d-434d-83de-2f3c354d5d0d","Type":"ContainerDied","Data":"ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95"} Dec 08 09:45:48 crc kubenswrapper[4662]: I1208 09:45:48.027017 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkxfd" event={"ID":"1a5cf49d-e068-463a-97da-f3bc35aab676","Type":"ContainerStarted","Data":"7837c9fdad37cd71203549ae6068f47ce669e592dd61c43352d6a9fe294302cc"} Dec 08 09:45:48 crc kubenswrapper[4662]: I1208 09:45:48.049826 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fkxfd" podStartSLOduration=2.3494576560000002 podStartE2EDuration="5.04980795s" podCreationTimestamp="2025-12-08 09:45:43 +0000 UTC" firstStartedPulling="2025-12-08 09:45:44.991699303 +0000 UTC m=+1868.560727293" lastFinishedPulling="2025-12-08 09:45:47.692049597 +0000 UTC m=+1871.261077587" observedRunningTime="2025-12-08 09:45:48.042844424 +0000 UTC m=+1871.611872414" watchObservedRunningTime="2025-12-08 09:45:48.04980795 +0000 UTC m=+1871.618835940" Dec 08 09:45:49 crc kubenswrapper[4662]: I1208 09:45:49.036946 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-cfhtk" event={"ID":"879e4758-a24d-434d-83de-2f3c354d5d0d","Type":"ContainerStarted","Data":"0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514"} Dec 08 09:45:50 crc kubenswrapper[4662]: I1208 09:45:50.047578 4662 generic.go:334] "Generic (PLEG): container finished" podID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerID="0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514" exitCode=0 Dec 08 09:45:50 crc kubenswrapper[4662]: I1208 09:45:50.047625 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfhtk" event={"ID":"879e4758-a24d-434d-83de-2f3c354d5d0d","Type":"ContainerDied","Data":"0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514"} Dec 08 09:45:51 crc kubenswrapper[4662]: I1208 09:45:51.063985 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfhtk" event={"ID":"879e4758-a24d-434d-83de-2f3c354d5d0d","Type":"ContainerStarted","Data":"8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603"} Dec 08 09:45:51 crc kubenswrapper[4662]: I1208 09:45:51.091501 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cfhtk" podStartSLOduration=3.339862342 podStartE2EDuration="7.091482317s" podCreationTimestamp="2025-12-08 09:45:44 +0000 UTC" firstStartedPulling="2025-12-08 09:45:47.019331711 +0000 UTC m=+1870.588359711" lastFinishedPulling="2025-12-08 09:45:50.770951686 +0000 UTC m=+1874.339979686" observedRunningTime="2025-12-08 09:45:51.084003588 +0000 UTC m=+1874.653031578" watchObservedRunningTime="2025-12-08 09:45:51.091482317 +0000 UTC m=+1874.660510297" Dec 08 09:45:53 crc kubenswrapper[4662]: I1208 09:45:53.733012 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:53 crc kubenswrapper[4662]: I1208 09:45:53.733374 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:53 crc kubenswrapper[4662]: I1208 09:45:53.784360 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:54 crc kubenswrapper[4662]: I1208 09:45:54.130550 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:54 crc kubenswrapper[4662]: I1208 09:45:54.984196 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fkxfd"] Dec 08 09:45:55 crc kubenswrapper[4662]: I1208 09:45:55.353529 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:55 crc kubenswrapper[4662]: I1208 09:45:55.353849 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:55 crc kubenswrapper[4662]: I1208 09:45:55.410539 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:56 crc kubenswrapper[4662]: I1208 09:45:56.105440 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fkxfd" podUID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerName="registry-server" 
containerID="cri-o://7837c9fdad37cd71203549ae6068f47ce669e592dd61c43352d6a9fe294302cc" gracePeriod=2 Dec 08 09:45:56 crc kubenswrapper[4662]: I1208 09:45:56.165520 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.119050 4662 generic.go:334] "Generic (PLEG): container finished" podID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerID="7837c9fdad37cd71203549ae6068f47ce669e592dd61c43352d6a9fe294302cc" exitCode=0 Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.119125 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkxfd" event={"ID":"1a5cf49d-e068-463a-97da-f3bc35aab676","Type":"ContainerDied","Data":"7837c9fdad37cd71203549ae6068f47ce669e592dd61c43352d6a9fe294302cc"} Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.391196 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cfhtk"] Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.692875 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.732144 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-utilities\") pod \"1a5cf49d-e068-463a-97da-f3bc35aab676\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.732297 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-catalog-content\") pod \"1a5cf49d-e068-463a-97da-f3bc35aab676\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.732342 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8lvq\" (UniqueName: \"kubernetes.io/projected/1a5cf49d-e068-463a-97da-f3bc35aab676-kube-api-access-n8lvq\") pod \"1a5cf49d-e068-463a-97da-f3bc35aab676\" (UID: \"1a5cf49d-e068-463a-97da-f3bc35aab676\") " Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.733663 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-utilities" (OuterVolumeSpecName: "utilities") pod "1a5cf49d-e068-463a-97da-f3bc35aab676" (UID: "1a5cf49d-e068-463a-97da-f3bc35aab676"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.738854 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a5cf49d-e068-463a-97da-f3bc35aab676-kube-api-access-n8lvq" (OuterVolumeSpecName: "kube-api-access-n8lvq") pod "1a5cf49d-e068-463a-97da-f3bc35aab676" (UID: "1a5cf49d-e068-463a-97da-f3bc35aab676"). InnerVolumeSpecName "kube-api-access-n8lvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.793903 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a5cf49d-e068-463a-97da-f3bc35aab676" (UID: "1a5cf49d-e068-463a-97da-f3bc35aab676"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.834341 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.834385 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8lvq\" (UniqueName: \"kubernetes.io/projected/1a5cf49d-e068-463a-97da-f3bc35aab676-kube-api-access-n8lvq\") on node \"crc\" DevicePath \"\"" Dec 08 09:45:57 crc kubenswrapper[4662]: I1208 09:45:57.834396 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a5cf49d-e068-463a-97da-f3bc35aab676-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.131290 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fkxfd" event={"ID":"1a5cf49d-e068-463a-97da-f3bc35aab676","Type":"ContainerDied","Data":"7e54d6db85f1881a09d496440a69e2cd3ae1528dd0cd018f216786ce458499d3"} Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.131329 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fkxfd" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.131357 4662 scope.go:117] "RemoveContainer" containerID="7837c9fdad37cd71203549ae6068f47ce669e592dd61c43352d6a9fe294302cc" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.133065 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cfhtk" podUID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerName="registry-server" containerID="cri-o://8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603" gracePeriod=2 Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.179893 4662 scope.go:117] "RemoveContainer" containerID="8549173fa973dd51e80e69b4466cac9694ea98390c4b31911d38a11a76c14426" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.189591 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fkxfd"] Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.205306 4662 scope.go:117] "RemoveContainer" containerID="a93e412059a85f0b22d0ca3a642b45a0beb09c14a9e2cbcb844bfa79ddf2f7d0" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.206253 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fkxfd"] Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.578922 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.582210 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjzrb\" (UniqueName: \"kubernetes.io/projected/879e4758-a24d-434d-83de-2f3c354d5d0d-kube-api-access-qjzrb\") pod \"879e4758-a24d-434d-83de-2f3c354d5d0d\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.587139 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/879e4758-a24d-434d-83de-2f3c354d5d0d-kube-api-access-qjzrb" (OuterVolumeSpecName: "kube-api-access-qjzrb") pod "879e4758-a24d-434d-83de-2f3c354d5d0d" (UID: "879e4758-a24d-434d-83de-2f3c354d5d0d"). InnerVolumeSpecName "kube-api-access-qjzrb". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.685451 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-catalog-content\") pod \"879e4758-a24d-434d-83de-2f3c354d5d0d\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.685850 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-utilities\") pod \"879e4758-a24d-434d-83de-2f3c354d5d0d\" (UID: \"879e4758-a24d-434d-83de-2f3c354d5d0d\") " Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.690753 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjzrb\" (UniqueName: \"kubernetes.io/projected/879e4758-a24d-434d-83de-2f3c354d5d0d-kube-api-access-qjzrb\") on node \"crc\" DevicePath \"\"" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.691617 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-utilities" (OuterVolumeSpecName: "utilities") pod "879e4758-a24d-434d-83de-2f3c354d5d0d" (UID: "879e4758-a24d-434d-83de-2f3c354d5d0d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.708250 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a5cf49d-e068-463a-97da-f3bc35aab676" path="/var/lib/kubelet/pods/1a5cf49d-e068-463a-97da-f3bc35aab676/volumes" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.741132 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "879e4758-a24d-434d-83de-2f3c354d5d0d" (UID: "879e4758-a24d-434d-83de-2f3c354d5d0d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.792153 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:45:58 crc kubenswrapper[4662]: I1208 09:45:58.792190 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/879e4758-a24d-434d-83de-2f3c354d5d0d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.147565 4662 generic.go:334] "Generic (PLEG): container finished" podID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerID="8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603" exitCode=0 Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.147616 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfhtk" event={"ID":"879e4758-a24d-434d-83de-2f3c354d5d0d","Type":"ContainerDied","Data":"8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603"} Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.147645 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cfhtk" event={"ID":"879e4758-a24d-434d-83de-2f3c354d5d0d","Type":"ContainerDied","Data":"d3a112567f8ba719c575c7b90bbe19b772e0d8df9dab5c3207a003ceed291a09"} Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.147644 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cfhtk" Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.147666 4662 scope.go:117] "RemoveContainer" containerID="8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603" Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.183631 4662 scope.go:117] "RemoveContainer" containerID="0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514" Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.216582 4662 scope.go:117] "RemoveContainer" containerID="ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95" Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.225818 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cfhtk"] Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.235002 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cfhtk"] Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.264406 4662 scope.go:117] "RemoveContainer" containerID="8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603" Dec 08 09:45:59 crc kubenswrapper[4662]: E1208 09:45:59.264886 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603\": container with ID starting with 8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603 not found: ID does not exist" containerID="8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603" Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.264929 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603"} err="failed to get container status \"8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603\": rpc 
error: code = NotFound desc = could not find container \"8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603\": container with ID starting with 8843fc064b5060be654bda7fe100af60f1b0e153edc887639d6268b65a4cf603 not found: ID does not exist" Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.264957 4662 scope.go:117] "RemoveContainer" containerID="0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514" Dec 08 09:45:59 crc kubenswrapper[4662]: E1208 09:45:59.265299 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514\": container with ID starting with 0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514 not found: ID does not exist" containerID="0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514" Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.265357 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514"} err="failed to get container status \"0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514\": rpc error: code = NotFound desc = could not find container \"0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514\": container with ID starting with 0db28a5e024a32ed7b3631049f8d3e9acc66c5a220162151acb7561b41ead514 not found: ID does not exist" Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.265392 4662 scope.go:117] "RemoveContainer" containerID="ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95" Dec 08 09:45:59 crc kubenswrapper[4662]: E1208 09:45:59.265729 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95\": container with ID starting with ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95 not found: ID does not exist" containerID="ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95" Dec 08 09:45:59 crc kubenswrapper[4662]: I1208 09:45:59.265866 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95"} err="failed to get container status \"ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95\": rpc error: code = NotFound desc = could not find container \"ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95\": container with ID starting with ddb9a3f8a5f7fd81207dac97d3cbf25b321f8754a001add6bab37c917573de95 not found: ID does not exist" Dec 08 09:46:00 crc kubenswrapper[4662]: I1208 09:46:00.714095 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="879e4758-a24d-434d-83de-2f3c354d5d0d" path="/var/lib/kubelet/pods/879e4758-a24d-434d-83de-2f3c354d5d0d/volumes" Dec 08 09:46:32 crc kubenswrapper[4662]: I1208 09:46:32.611681 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:46:32 crc kubenswrapper[4662]: I1208 09:46:32.612313 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" 
podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:46:49 crc kubenswrapper[4662]: I1208 09:46:49.300425 4662 scope.go:117] "RemoveContainer" containerID="ee4dfddd6767a1ff1b032229b590c64c4eb94086daf9ed08e1a804e3cb9a45cb" Dec 08 09:46:49 crc kubenswrapper[4662]: I1208 09:46:49.324775 4662 scope.go:117] "RemoveContainer" containerID="7b34eb396211746192e2e65e72daa5dda93ee026d3516fdc50d5e64f1612ee52" Dec 08 09:46:49 crc kubenswrapper[4662]: I1208 09:46:49.366222 4662 scope.go:117] "RemoveContainer" containerID="227067d874c6d46b6f16d36312a6904f2531621e93f69c2a488e4e942f9565d2" Dec 08 09:47:02 crc kubenswrapper[4662]: I1208 09:47:02.611864 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:47:02 crc kubenswrapper[4662]: I1208 09:47:02.612223 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:47:32 crc kubenswrapper[4662]: I1208 09:47:32.611376 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:47:32 crc kubenswrapper[4662]: I1208 09:47:32.613102 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:47:32 crc kubenswrapper[4662]: I1208 09:47:32.613411 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" Dec 08 09:47:32 crc kubenswrapper[4662]: I1208 09:47:32.614246 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e6c4becf03c0eb64cc76451dcb0b9d5535374c59a72e6a7c92ee83afc741d04"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 09:47:32 crc kubenswrapper[4662]: I1208 09:47:32.614420 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" containerID="cri-o://1e6c4becf03c0eb64cc76451dcb0b9d5535374c59a72e6a7c92ee83afc741d04" gracePeriod=600 Dec 08 09:47:33 crc kubenswrapper[4662]: I1208 09:47:33.200599 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerID="1e6c4becf03c0eb64cc76451dcb0b9d5535374c59a72e6a7c92ee83afc741d04" exitCode=0 Dec 08 09:47:33 crc 
kubenswrapper[4662]: I1208 09:47:33.200693 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"1e6c4becf03c0eb64cc76451dcb0b9d5535374c59a72e6a7c92ee83afc741d04"} Dec 08 09:47:33 crc kubenswrapper[4662]: I1208 09:47:33.201031 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a"} Dec 08 09:47:33 crc kubenswrapper[4662]: I1208 09:47:33.201063 4662 scope.go:117] "RemoveContainer" containerID="e843e206e076eb9b6e86543a45dc5a5d4b98617aadac67fb94318202ec55d88e" Dec 08 09:49:32 crc kubenswrapper[4662]: I1208 09:49:32.611037 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:49:32 crc kubenswrapper[4662]: I1208 09:49:32.611547 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:50:02 crc kubenswrapper[4662]: I1208 09:50:02.611207 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 09:50:02 crc kubenswrapper[4662]: I1208 09:50:02.611771 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.449618 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4x86p"] Dec 08 09:50:06 crc kubenswrapper[4662]: E1208 09:50:06.450512 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerName="extract-utilities" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.450528 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerName="extract-utilities" Dec 08 09:50:06 crc kubenswrapper[4662]: E1208 09:50:06.450548 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerName="registry-server" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.450556 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerName="registry-server" Dec 08 09:50:06 crc kubenswrapper[4662]: E1208 09:50:06.450577 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerName="extract-content" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 
09:50:06.450585 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerName="extract-content" Dec 08 09:50:06 crc kubenswrapper[4662]: E1208 09:50:06.450607 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerName="extract-utilities" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.450615 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerName="extract-utilities" Dec 08 09:50:06 crc kubenswrapper[4662]: E1208 09:50:06.450634 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerName="registry-server" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.450641 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerName="registry-server" Dec 08 09:50:06 crc kubenswrapper[4662]: E1208 09:50:06.450659 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerName="extract-content" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.450665 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerName="extract-content" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.450843 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a5cf49d-e068-463a-97da-f3bc35aab676" containerName="registry-server" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.450870 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="879e4758-a24d-434d-83de-2f3c354d5d0d" containerName="registry-server" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.452103 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.476777 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4x86p"] Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.581789 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-utilities\") pod \"redhat-operators-4x86p\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.582129 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-catalog-content\") pod \"redhat-operators-4x86p\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.582181 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7mhw\" (UniqueName: \"kubernetes.io/projected/7f5b857b-d180-4036-89ae-7d01f297b957-kube-api-access-h7mhw\") pod \"redhat-operators-4x86p\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.684339 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-catalog-content\") pod \"redhat-operators-4x86p\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.684446 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7mhw\" (UniqueName: \"kubernetes.io/projected/7f5b857b-d180-4036-89ae-7d01f297b957-kube-api-access-h7mhw\") pod \"redhat-operators-4x86p\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.684560 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-utilities\") pod \"redhat-operators-4x86p\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.685328 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-utilities\") pod \"redhat-operators-4x86p\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.685417 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-catalog-content\") pod \"redhat-operators-4x86p\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.704827 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h7mhw\" (UniqueName: \"kubernetes.io/projected/7f5b857b-d180-4036-89ae-7d01f297b957-kube-api-access-h7mhw\") pod \"redhat-operators-4x86p\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:06 crc kubenswrapper[4662]: I1208 09:50:06.769207 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:07 crc kubenswrapper[4662]: I1208 09:50:07.271149 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4x86p"] Dec 08 09:50:07 crc kubenswrapper[4662]: I1208 09:50:07.519977 4662 generic.go:334] "Generic (PLEG): container finished" podID="7f5b857b-d180-4036-89ae-7d01f297b957" containerID="d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55" exitCode=0 Dec 08 09:50:07 crc kubenswrapper[4662]: I1208 09:50:07.520074 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x86p" event={"ID":"7f5b857b-d180-4036-89ae-7d01f297b957","Type":"ContainerDied","Data":"d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55"} Dec 08 09:50:07 crc kubenswrapper[4662]: I1208 09:50:07.520362 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x86p" event={"ID":"7f5b857b-d180-4036-89ae-7d01f297b957","Type":"ContainerStarted","Data":"53123c3d29cd508ee6402ae8bb2c08219246ece9e484157ef3520daf5b5e5a58"} Dec 08 09:50:08 crc kubenswrapper[4662]: I1208 09:50:08.528833 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x86p" event={"ID":"7f5b857b-d180-4036-89ae-7d01f297b957","Type":"ContainerStarted","Data":"752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f"} Dec 08 09:50:11 crc kubenswrapper[4662]: I1208 09:50:11.554188 4662 generic.go:334] "Generic (PLEG): container finished" podID="7f5b857b-d180-4036-89ae-7d01f297b957" containerID="752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f" exitCode=0 Dec 08 09:50:11 crc kubenswrapper[4662]: I1208 09:50:11.554399 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x86p" event={"ID":"7f5b857b-d180-4036-89ae-7d01f297b957","Type":"ContainerDied","Data":"752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f"} Dec 08 09:50:12 crc kubenswrapper[4662]: I1208 09:50:12.566434 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x86p" event={"ID":"7f5b857b-d180-4036-89ae-7d01f297b957","Type":"ContainerStarted","Data":"333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878"} Dec 08 09:50:12 crc kubenswrapper[4662]: I1208 09:50:12.597882 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4x86p" podStartSLOduration=2.168338583 podStartE2EDuration="6.597858238s" podCreationTimestamp="2025-12-08 09:50:06 +0000 UTC" firstStartedPulling="2025-12-08 09:50:07.521630616 +0000 UTC m=+2131.090658596" lastFinishedPulling="2025-12-08 09:50:11.951150261 +0000 UTC m=+2135.520178251" observedRunningTime="2025-12-08 09:50:12.585296242 +0000 UTC m=+2136.154324252" watchObservedRunningTime="2025-12-08 09:50:12.597858238 +0000 UTC m=+2136.166886248" Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.422651 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdkk"] Dec 08 09:50:13 crc 
kubenswrapper[4662]: I1208 09:50:13.425457 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.432122 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdkk"] Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.522336 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9hbl\" (UniqueName: \"kubernetes.io/projected/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-kube-api-access-l9hbl\") pod \"redhat-marketplace-9jdkk\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.522441 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-catalog-content\") pod \"redhat-marketplace-9jdkk\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.522479 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-utilities\") pod \"redhat-marketplace-9jdkk\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.624177 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9hbl\" (UniqueName: \"kubernetes.io/projected/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-kube-api-access-l9hbl\") pod \"redhat-marketplace-9jdkk\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.624259 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-catalog-content\") pod \"redhat-marketplace-9jdkk\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.624278 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-utilities\") pod \"redhat-marketplace-9jdkk\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.624887 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-catalog-content\") pod \"redhat-marketplace-9jdkk\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.624902 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-utilities\") pod \"redhat-marketplace-9jdkk\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:13 crc 
kubenswrapper[4662]: I1208 09:50:13.648615 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9hbl\" (UniqueName: \"kubernetes.io/projected/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-kube-api-access-l9hbl\") pod \"redhat-marketplace-9jdkk\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:13 crc kubenswrapper[4662]: I1208 09:50:13.757037 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.235530 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qdmzh/must-gather-fqvzr"] Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.238087 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qdmzh/must-gather-fqvzr" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.242308 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qdmzh"/"kube-root-ca.crt" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.242969 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-qdmzh"/"openshift-service-ca.crt" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.272786 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qdmzh/must-gather-fqvzr"] Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.338860 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5n66\" (UniqueName: \"kubernetes.io/projected/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-kube-api-access-x5n66\") pod \"must-gather-fqvzr\" (UID: \"ef624d5b-6078-4049-b3a4-3e9cbbe2730f\") " pod="openshift-must-gather-qdmzh/must-gather-fqvzr" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.338957 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-must-gather-output\") pod \"must-gather-fqvzr\" (UID: \"ef624d5b-6078-4049-b3a4-3e9cbbe2730f\") " pod="openshift-must-gather-qdmzh/must-gather-fqvzr" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.433986 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdkk"] Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.440078 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-must-gather-output\") pod \"must-gather-fqvzr\" (UID: \"ef624d5b-6078-4049-b3a4-3e9cbbe2730f\") " pod="openshift-must-gather-qdmzh/must-gather-fqvzr" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.440217 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5n66\" (UniqueName: \"kubernetes.io/projected/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-kube-api-access-x5n66\") pod \"must-gather-fqvzr\" (UID: \"ef624d5b-6078-4049-b3a4-3e9cbbe2730f\") " pod="openshift-must-gather-qdmzh/must-gather-fqvzr" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.440897 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-must-gather-output\") pod \"must-gather-fqvzr\" 
(UID: \"ef624d5b-6078-4049-b3a4-3e9cbbe2730f\") " pod="openshift-must-gather-qdmzh/must-gather-fqvzr" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.467180 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5n66\" (UniqueName: \"kubernetes.io/projected/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-kube-api-access-x5n66\") pod \"must-gather-fqvzr\" (UID: \"ef624d5b-6078-4049-b3a4-3e9cbbe2730f\") " pod="openshift-must-gather-qdmzh/must-gather-fqvzr" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.574784 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qdmzh/must-gather-fqvzr" Dec 08 09:50:14 crc kubenswrapper[4662]: I1208 09:50:14.582284 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdkk" event={"ID":"3a0f15f6-c6e6-4e64-90b0-edc94f90882b","Type":"ContainerStarted","Data":"6a6be260712d239fa17bb939ec4fc917da65f7138b124a7e1b1e545d692b8ae4"} Dec 08 09:50:15 crc kubenswrapper[4662]: I1208 09:50:15.189947 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-qdmzh/must-gather-fqvzr"] Dec 08 09:50:15 crc kubenswrapper[4662]: I1208 09:50:15.593625 4662 generic.go:334] "Generic (PLEG): container finished" podID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerID="fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77" exitCode=0 Dec 08 09:50:15 crc kubenswrapper[4662]: I1208 09:50:15.593894 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdkk" event={"ID":"3a0f15f6-c6e6-4e64-90b0-edc94f90882b","Type":"ContainerDied","Data":"fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77"} Dec 08 09:50:15 crc kubenswrapper[4662]: I1208 09:50:15.609954 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qdmzh/must-gather-fqvzr" event={"ID":"ef624d5b-6078-4049-b3a4-3e9cbbe2730f","Type":"ContainerStarted","Data":"88201a7e65dff8a802f38d7dc7ed2e9bc0287e0267d63b8a47e2c533ceec2c14"} Dec 08 09:50:16 crc kubenswrapper[4662]: I1208 09:50:16.627137 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdkk" event={"ID":"3a0f15f6-c6e6-4e64-90b0-edc94f90882b","Type":"ContainerStarted","Data":"f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a"} Dec 08 09:50:16 crc kubenswrapper[4662]: I1208 09:50:16.770088 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:16 crc kubenswrapper[4662]: I1208 09:50:16.770128 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:17 crc kubenswrapper[4662]: I1208 09:50:17.639851 4662 generic.go:334] "Generic (PLEG): container finished" podID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerID="f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a" exitCode=0 Dec 08 09:50:17 crc kubenswrapper[4662]: I1208 09:50:17.639913 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdkk" event={"ID":"3a0f15f6-c6e6-4e64-90b0-edc94f90882b","Type":"ContainerDied","Data":"f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a"} Dec 08 09:50:17 crc kubenswrapper[4662]: I1208 09:50:17.828231 4662 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4x86p" 
podUID="7f5b857b-d180-4036-89ae-7d01f297b957" containerName="registry-server" probeResult="failure" output=< Dec 08 09:50:17 crc kubenswrapper[4662]: timeout: failed to connect service ":50051" within 1s Dec 08 09:50:17 crc kubenswrapper[4662]: > Dec 08 09:50:18 crc kubenswrapper[4662]: I1208 09:50:18.661343 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdkk" event={"ID":"3a0f15f6-c6e6-4e64-90b0-edc94f90882b","Type":"ContainerStarted","Data":"dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f"} Dec 08 09:50:18 crc kubenswrapper[4662]: I1208 09:50:18.689229 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9jdkk" podStartSLOduration=3.166765279 podStartE2EDuration="5.68920981s" podCreationTimestamp="2025-12-08 09:50:13 +0000 UTC" firstStartedPulling="2025-12-08 09:50:15.595456014 +0000 UTC m=+2139.164484004" lastFinishedPulling="2025-12-08 09:50:18.117900535 +0000 UTC m=+2141.686928535" observedRunningTime="2025-12-08 09:50:18.678974205 +0000 UTC m=+2142.248002205" watchObservedRunningTime="2025-12-08 09:50:18.68920981 +0000 UTC m=+2142.258237800" Dec 08 09:50:23 crc kubenswrapper[4662]: I1208 09:50:23.758229 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:23 crc kubenswrapper[4662]: I1208 09:50:23.758554 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:23 crc kubenswrapper[4662]: I1208 09:50:23.811704 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:24 crc kubenswrapper[4662]: I1208 09:50:24.778279 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:24 crc kubenswrapper[4662]: I1208 09:50:24.823492 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdkk"] Dec 08 09:50:25 crc kubenswrapper[4662]: I1208 09:50:25.732162 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qdmzh/must-gather-fqvzr" event={"ID":"ef624d5b-6078-4049-b3a4-3e9cbbe2730f","Type":"ContainerStarted","Data":"8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71"} Dec 08 09:50:26 crc kubenswrapper[4662]: I1208 09:50:26.744189 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qdmzh/must-gather-fqvzr" event={"ID":"ef624d5b-6078-4049-b3a4-3e9cbbe2730f","Type":"ContainerStarted","Data":"c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7"} Dec 08 09:50:26 crc kubenswrapper[4662]: I1208 09:50:26.744650 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9jdkk" podUID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerName="registry-server" containerID="cri-o://dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f" gracePeriod=2 Dec 08 09:50:26 crc kubenswrapper[4662]: I1208 09:50:26.759380 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qdmzh/must-gather-fqvzr" podStartSLOduration=2.608235371 podStartE2EDuration="12.75936083s" podCreationTimestamp="2025-12-08 09:50:14 +0000 UTC" firstStartedPulling="2025-12-08 09:50:15.228314056 +0000 UTC m=+2138.797342036" 
lastFinishedPulling="2025-12-08 09:50:25.379439505 +0000 UTC m=+2148.948467495" observedRunningTime="2025-12-08 09:50:26.757564292 +0000 UTC m=+2150.326592292" watchObservedRunningTime="2025-12-08 09:50:26.75936083 +0000 UTC m=+2150.328388830" Dec 08 09:50:26 crc kubenswrapper[4662]: I1208 09:50:26.817668 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:26 crc kubenswrapper[4662]: I1208 09:50:26.865025 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.168999 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.323610 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-catalog-content\") pod \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.323750 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9hbl\" (UniqueName: \"kubernetes.io/projected/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-kube-api-access-l9hbl\") pod \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.323803 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-utilities\") pod \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\" (UID: \"3a0f15f6-c6e6-4e64-90b0-edc94f90882b\") " Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.324439 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-utilities" (OuterVolumeSpecName: "utilities") pod "3a0f15f6-c6e6-4e64-90b0-edc94f90882b" (UID: "3a0f15f6-c6e6-4e64-90b0-edc94f90882b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.338010 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-kube-api-access-l9hbl" (OuterVolumeSpecName: "kube-api-access-l9hbl") pod "3a0f15f6-c6e6-4e64-90b0-edc94f90882b" (UID: "3a0f15f6-c6e6-4e64-90b0-edc94f90882b"). InnerVolumeSpecName "kube-api-access-l9hbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.364174 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a0f15f6-c6e6-4e64-90b0-edc94f90882b" (UID: "3a0f15f6-c6e6-4e64-90b0-edc94f90882b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.426428 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.426471 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9hbl\" (UniqueName: \"kubernetes.io/projected/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-kube-api-access-l9hbl\") on node \"crc\" DevicePath \"\"" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.426485 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a0f15f6-c6e6-4e64-90b0-edc94f90882b-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.658884 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4x86p"] Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.752943 4662 generic.go:334] "Generic (PLEG): container finished" podID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerID="dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f" exitCode=0 Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.753865 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9jdkk" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.760243 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdkk" event={"ID":"3a0f15f6-c6e6-4e64-90b0-edc94f90882b","Type":"ContainerDied","Data":"dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f"} Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.760372 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9jdkk" event={"ID":"3a0f15f6-c6e6-4e64-90b0-edc94f90882b","Type":"ContainerDied","Data":"6a6be260712d239fa17bb939ec4fc917da65f7138b124a7e1b1e545d692b8ae4"} Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.760397 4662 scope.go:117] "RemoveContainer" containerID="dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.792032 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdkk"] Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.802178 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9jdkk"] Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.820045 4662 scope.go:117] "RemoveContainer" containerID="f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.896031 4662 scope.go:117] "RemoveContainer" containerID="fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.946127 4662 scope.go:117] "RemoveContainer" containerID="dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f" Dec 08 09:50:27 crc kubenswrapper[4662]: E1208 09:50:27.946953 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f\": container with ID starting with dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f not 
found: ID does not exist" containerID="dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.946983 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f"} err="failed to get container status \"dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f\": rpc error: code = NotFound desc = could not find container \"dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f\": container with ID starting with dc7ec8c0b8de5367706f4e5b3005b8b4d002f02daf6f1d5f4bf19f3524765a7f not found: ID does not exist" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.947004 4662 scope.go:117] "RemoveContainer" containerID="f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a" Dec 08 09:50:27 crc kubenswrapper[4662]: E1208 09:50:27.947303 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a\": container with ID starting with f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a not found: ID does not exist" containerID="f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.947325 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a"} err="failed to get container status \"f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a\": rpc error: code = NotFound desc = could not find container \"f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a\": container with ID starting with f7cc612fc3ebc810b486f515ed4d9815bba18bfec79de6f18993771dbdc71d1a not found: ID does not exist" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.947337 4662 scope.go:117] "RemoveContainer" containerID="fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77" Dec 08 09:50:27 crc kubenswrapper[4662]: E1208 09:50:27.947668 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77\": container with ID starting with fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77 not found: ID does not exist" containerID="fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77" Dec 08 09:50:27 crc kubenswrapper[4662]: I1208 09:50:27.947707 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77"} err="failed to get container status \"fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77\": rpc error: code = NotFound desc = could not find container \"fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77\": container with ID starting with fe95ddfd790ec34fb475f5a0c213c95e2389838743e56136100c9527e61e5d77 not found: ID does not exist" Dec 08 09:50:28 crc kubenswrapper[4662]: I1208 09:50:28.708893 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" path="/var/lib/kubelet/pods/3a0f15f6-c6e6-4e64-90b0-edc94f90882b/volumes" Dec 08 09:50:28 crc kubenswrapper[4662]: I1208 09:50:28.762621 4662 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/redhat-operators-4x86p" podUID="7f5b857b-d180-4036-89ae-7d01f297b957" containerName="registry-server" containerID="cri-o://333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878" gracePeriod=2 Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.262000 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.404562 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7mhw\" (UniqueName: \"kubernetes.io/projected/7f5b857b-d180-4036-89ae-7d01f297b957-kube-api-access-h7mhw\") pod \"7f5b857b-d180-4036-89ae-7d01f297b957\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.404854 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-utilities\") pod \"7f5b857b-d180-4036-89ae-7d01f297b957\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.404897 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-catalog-content\") pod \"7f5b857b-d180-4036-89ae-7d01f297b957\" (UID: \"7f5b857b-d180-4036-89ae-7d01f297b957\") " Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.405359 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-utilities" (OuterVolumeSpecName: "utilities") pod "7f5b857b-d180-4036-89ae-7d01f297b957" (UID: "7f5b857b-d180-4036-89ae-7d01f297b957"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.423917 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f5b857b-d180-4036-89ae-7d01f297b957-kube-api-access-h7mhw" (OuterVolumeSpecName: "kube-api-access-h7mhw") pod "7f5b857b-d180-4036-89ae-7d01f297b957" (UID: "7f5b857b-d180-4036-89ae-7d01f297b957"). InnerVolumeSpecName "kube-api-access-h7mhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.504114 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qdmzh/crc-debug-bbktq"] Dec 08 09:50:29 crc kubenswrapper[4662]: E1208 09:50:29.504533 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5b857b-d180-4036-89ae-7d01f297b957" containerName="registry-server" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.504546 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5b857b-d180-4036-89ae-7d01f297b957" containerName="registry-server" Dec 08 09:50:29 crc kubenswrapper[4662]: E1208 09:50:29.504567 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerName="extract-utilities" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.504573 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerName="extract-utilities" Dec 08 09:50:29 crc kubenswrapper[4662]: E1208 09:50:29.504584 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5b857b-d180-4036-89ae-7d01f297b957" containerName="extract-utilities" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.504592 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5b857b-d180-4036-89ae-7d01f297b957" containerName="extract-utilities" Dec 08 09:50:29 crc kubenswrapper[4662]: E1208 09:50:29.504608 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerName="extract-content" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.504614 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerName="extract-content" Dec 08 09:50:29 crc kubenswrapper[4662]: E1208 09:50:29.504623 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7f5b857b-d180-4036-89ae-7d01f297b957" containerName="extract-content" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.504629 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f5b857b-d180-4036-89ae-7d01f297b957" containerName="extract-content" Dec 08 09:50:29 crc kubenswrapper[4662]: E1208 09:50:29.504648 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerName="registry-server" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.504654 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerName="registry-server" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.504891 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a0f15f6-c6e6-4e64-90b0-edc94f90882b" containerName="registry-server" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.504908 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="7f5b857b-d180-4036-89ae-7d01f297b957" containerName="registry-server" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.505550 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qdmzh/crc-debug-bbktq" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.507023 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.507042 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7mhw\" (UniqueName: \"kubernetes.io/projected/7f5b857b-d180-4036-89ae-7d01f297b957-kube-api-access-h7mhw\") on node \"crc\" DevicePath \"\"" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.508129 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qdmzh"/"default-dockercfg-s7fgw" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.510169 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7f5b857b-d180-4036-89ae-7d01f297b957" (UID: "7f5b857b-d180-4036-89ae-7d01f297b957"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.609011 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5wn2\" (UniqueName: \"kubernetes.io/projected/4c3d9411-7fc2-408f-9197-2e4808f45735-kube-api-access-r5wn2\") pod \"crc-debug-bbktq\" (UID: \"4c3d9411-7fc2-408f-9197-2e4808f45735\") " pod="openshift-must-gather-qdmzh/crc-debug-bbktq" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.609158 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c3d9411-7fc2-408f-9197-2e4808f45735-host\") pod \"crc-debug-bbktq\" (UID: \"4c3d9411-7fc2-408f-9197-2e4808f45735\") " pod="openshift-must-gather-qdmzh/crc-debug-bbktq" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.609259 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7f5b857b-d180-4036-89ae-7d01f297b957-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.710572 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5wn2\" (UniqueName: \"kubernetes.io/projected/4c3d9411-7fc2-408f-9197-2e4808f45735-kube-api-access-r5wn2\") pod \"crc-debug-bbktq\" (UID: \"4c3d9411-7fc2-408f-9197-2e4808f45735\") " pod="openshift-must-gather-qdmzh/crc-debug-bbktq" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.710873 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c3d9411-7fc2-408f-9197-2e4808f45735-host\") pod \"crc-debug-bbktq\" (UID: \"4c3d9411-7fc2-408f-9197-2e4808f45735\") " pod="openshift-must-gather-qdmzh/crc-debug-bbktq" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.710975 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c3d9411-7fc2-408f-9197-2e4808f45735-host\") pod \"crc-debug-bbktq\" (UID: \"4c3d9411-7fc2-408f-9197-2e4808f45735\") " pod="openshift-must-gather-qdmzh/crc-debug-bbktq" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.732231 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-r5wn2\" (UniqueName: \"kubernetes.io/projected/4c3d9411-7fc2-408f-9197-2e4808f45735-kube-api-access-r5wn2\") pod \"crc-debug-bbktq\" (UID: \"4c3d9411-7fc2-408f-9197-2e4808f45735\") " pod="openshift-must-gather-qdmzh/crc-debug-bbktq" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.772084 4662 generic.go:334] "Generic (PLEG): container finished" podID="7f5b857b-d180-4036-89ae-7d01f297b957" containerID="333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878" exitCode=0 Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.772149 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x86p" event={"ID":"7f5b857b-d180-4036-89ae-7d01f297b957","Type":"ContainerDied","Data":"333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878"} Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.772189 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4x86p" event={"ID":"7f5b857b-d180-4036-89ae-7d01f297b957","Type":"ContainerDied","Data":"53123c3d29cd508ee6402ae8bb2c08219246ece9e484157ef3520daf5b5e5a58"} Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.772211 4662 scope.go:117] "RemoveContainer" containerID="333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.772357 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4x86p" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.820658 4662 scope.go:117] "RemoveContainer" containerID="752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.823059 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4x86p"] Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.823417 4662 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qdmzh/crc-debug-bbktq" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.834451 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4x86p"] Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.860996 4662 scope.go:117] "RemoveContainer" containerID="d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.904029 4662 scope.go:117] "RemoveContainer" containerID="333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878" Dec 08 09:50:29 crc kubenswrapper[4662]: E1208 09:50:29.906862 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878\": container with ID starting with 333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878 not found: ID does not exist" containerID="333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.906901 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878"} err="failed to get container status \"333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878\": rpc error: code = NotFound desc = could not find container \"333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878\": container with ID starting with 333dfb156fadf9d1bb8cd4f2852f2f89fe805a331bf2783d984811c801064878 not found: ID does not exist" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.906927 4662 scope.go:117] "RemoveContainer" containerID="752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f" Dec 08 09:50:29 crc kubenswrapper[4662]: E1208 09:50:29.907647 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f\": container with ID starting with 752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f not found: ID does not exist" containerID="752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.907678 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f"} err="failed to get container status \"752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f\": rpc error: code = NotFound desc = could not find container \"752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f\": container with ID starting with 752273cd14e9b02e8d7adc3afbe2c8052ec8d0f7550d1f6e18bbd4df4ab3027f not found: ID does not exist" Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.907700 4662 scope.go:117] "RemoveContainer" containerID="d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55" Dec 08 09:50:29 crc kubenswrapper[4662]: E1208 09:50:29.908040 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55\": container with ID starting with d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55 not found: ID does not exist" containerID="d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55" Dec 08 
Dec 08 09:50:29 crc kubenswrapper[4662]: I1208 09:50:29.908068 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55"} err="failed to get container status \"d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55\": rpc error: code = NotFound desc = could not find container \"d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55\": container with ID starting with d623605c57a1ea6c2e78e165e613e9bfa1e63f34e3160679242893a619f28e55 not found: ID does not exist"
Dec 08 09:50:30 crc kubenswrapper[4662]: I1208 09:50:30.708623 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7f5b857b-d180-4036-89ae-7d01f297b957" path="/var/lib/kubelet/pods/7f5b857b-d180-4036-89ae-7d01f297b957/volumes"
Dec 08 09:50:30 crc kubenswrapper[4662]: I1208 09:50:30.786531 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qdmzh/crc-debug-bbktq" event={"ID":"4c3d9411-7fc2-408f-9197-2e4808f45735","Type":"ContainerStarted","Data":"9c88d80ede43711f13cea0ca65894aac23b0a965541023031b2e5cca1b8c148e"}
Dec 08 09:50:32 crc kubenswrapper[4662]: I1208 09:50:32.611270 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 09:50:32 crc kubenswrapper[4662]: I1208 09:50:32.611766 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 09:50:32 crc kubenswrapper[4662]: I1208 09:50:32.611814 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps"
Dec 08 09:50:32 crc kubenswrapper[4662]: I1208 09:50:32.612596 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 09:50:32 crc kubenswrapper[4662]: I1208 09:50:32.612648 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" containerID="cri-o://ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" gracePeriod=600
Dec 08 09:50:32 crc kubenswrapper[4662]: E1208 09:50:32.746392 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:50:32 crc kubenswrapper[4662]: I1208 09:50:32.809494 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" exitCode=0
Dec 08 09:50:32 crc kubenswrapper[4662]: I1208 09:50:32.809540 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a"}
Dec 08 09:50:32 crc kubenswrapper[4662]: I1208 09:50:32.809577 4662 scope.go:117] "RemoveContainer" containerID="1e6c4becf03c0eb64cc76451dcb0b9d5535374c59a72e6a7c92ee83afc741d04"
Dec 08 09:50:32 crc kubenswrapper[4662]: I1208 09:50:32.810431 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a"
Dec 08 09:50:32 crc kubenswrapper[4662]: E1208 09:50:32.810811 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:50:43 crc kubenswrapper[4662]: I1208 09:50:43.914977 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qdmzh/crc-debug-bbktq" event={"ID":"4c3d9411-7fc2-408f-9197-2e4808f45735","Type":"ContainerStarted","Data":"f22568040e8e14b308b62aec9d8139a6cd4a2c81c62042f88475986b47acd8d6"}
Dec 08 09:50:43 crc kubenswrapper[4662]: I1208 09:50:43.936200 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-qdmzh/crc-debug-bbktq" podStartSLOduration=1.847544719 podStartE2EDuration="14.936182152s" podCreationTimestamp="2025-12-08 09:50:29 +0000 UTC" firstStartedPulling="2025-12-08 09:50:29.876274986 +0000 UTC m=+2153.445302976" lastFinishedPulling="2025-12-08 09:50:42.964912419 +0000 UTC m=+2166.533940409" observedRunningTime="2025-12-08 09:50:43.931426325 +0000 UTC m=+2167.500454315" watchObservedRunningTime="2025-12-08 09:50:43.936182152 +0000 UTC m=+2167.505210142"
Dec 08 09:50:46 crc kubenswrapper[4662]: I1208 09:50:46.702439 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a"
Dec 08 09:50:46 crc kubenswrapper[4662]: E1208 09:50:46.704308 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:50:59 crc kubenswrapper[4662]: I1208 09:50:59.697732 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a"
Dec 08 09:50:59 crc kubenswrapper[4662]: E1208 09:50:59.698440 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:51:03 crc kubenswrapper[4662]: I1208 09:51:03.074756 4662 generic.go:334] "Generic (PLEG): container finished" podID="4c3d9411-7fc2-408f-9197-2e4808f45735" containerID="f22568040e8e14b308b62aec9d8139a6cd4a2c81c62042f88475986b47acd8d6" exitCode=0
Dec 08 09:51:03 crc kubenswrapper[4662]: I1208 09:51:03.074946 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qdmzh/crc-debug-bbktq" event={"ID":"4c3d9411-7fc2-408f-9197-2e4808f45735","Type":"ContainerDied","Data":"f22568040e8e14b308b62aec9d8139a6cd4a2c81c62042f88475986b47acd8d6"}
Dec 08 09:51:04 crc kubenswrapper[4662]: I1208 09:51:04.185553 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qdmzh/crc-debug-bbktq"
Dec 08 09:51:04 crc kubenswrapper[4662]: I1208 09:51:04.222550 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qdmzh/crc-debug-bbktq"]
Dec 08 09:51:04 crc kubenswrapper[4662]: I1208 09:51:04.231525 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qdmzh/crc-debug-bbktq"]
Dec 08 09:51:04 crc kubenswrapper[4662]: I1208 09:51:04.377094 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5wn2\" (UniqueName: \"kubernetes.io/projected/4c3d9411-7fc2-408f-9197-2e4808f45735-kube-api-access-r5wn2\") pod \"4c3d9411-7fc2-408f-9197-2e4808f45735\" (UID: \"4c3d9411-7fc2-408f-9197-2e4808f45735\") "
Dec 08 09:51:04 crc kubenswrapper[4662]: I1208 09:51:04.377317 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c3d9411-7fc2-408f-9197-2e4808f45735-host\") pod \"4c3d9411-7fc2-408f-9197-2e4808f45735\" (UID: \"4c3d9411-7fc2-408f-9197-2e4808f45735\") "
Dec 08 09:51:04 crc kubenswrapper[4662]: I1208 09:51:04.377396 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c3d9411-7fc2-408f-9197-2e4808f45735-host" (OuterVolumeSpecName: "host") pod "4c3d9411-7fc2-408f-9197-2e4808f45735" (UID: "4c3d9411-7fc2-408f-9197-2e4808f45735"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 08 09:51:04 crc kubenswrapper[4662]: I1208 09:51:04.377644 4662 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/4c3d9411-7fc2-408f-9197-2e4808f45735-host\") on node \"crc\" DevicePath \"\""
Dec 08 09:51:04 crc kubenswrapper[4662]: I1208 09:51:04.389969 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c3d9411-7fc2-408f-9197-2e4808f45735-kube-api-access-r5wn2" (OuterVolumeSpecName: "kube-api-access-r5wn2") pod "4c3d9411-7fc2-408f-9197-2e4808f45735" (UID: "4c3d9411-7fc2-408f-9197-2e4808f45735"). InnerVolumeSpecName "kube-api-access-r5wn2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:51:04 crc kubenswrapper[4662]: I1208 09:51:04.479689 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5wn2\" (UniqueName: \"kubernetes.io/projected/4c3d9411-7fc2-408f-9197-2e4808f45735-kube-api-access-r5wn2\") on node \"crc\" DevicePath \"\""
Dec 08 09:51:04 crc kubenswrapper[4662]: I1208 09:51:04.715425 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c3d9411-7fc2-408f-9197-2e4808f45735" path="/var/lib/kubelet/pods/4c3d9411-7fc2-408f-9197-2e4808f45735/volumes"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.090434 4662 scope.go:117] "RemoveContainer" containerID="f22568040e8e14b308b62aec9d8139a6cd4a2c81c62042f88475986b47acd8d6"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.090502 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qdmzh/crc-debug-bbktq"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.393078 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-qdmzh/crc-debug-2hrpj"]
Dec 08 09:51:05 crc kubenswrapper[4662]: E1208 09:51:05.393435 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3d9411-7fc2-408f-9197-2e4808f45735" containerName="container-00"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.393446 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3d9411-7fc2-408f-9197-2e4808f45735" containerName="container-00"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.393622 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3d9411-7fc2-408f-9197-2e4808f45735" containerName="container-00"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.394192 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qdmzh/crc-debug-2hrpj"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.399719 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-qdmzh"/"default-dockercfg-s7fgw"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.494674 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wsmt\" (UniqueName: \"kubernetes.io/projected/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-kube-api-access-6wsmt\") pod \"crc-debug-2hrpj\" (UID: \"1d8e9c63-4f22-4263-8a4e-4874d3145b5a\") " pod="openshift-must-gather-qdmzh/crc-debug-2hrpj"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.494836 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-host\") pod \"crc-debug-2hrpj\" (UID: \"1d8e9c63-4f22-4263-8a4e-4874d3145b5a\") " pod="openshift-must-gather-qdmzh/crc-debug-2hrpj"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.596644 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wsmt\" (UniqueName: \"kubernetes.io/projected/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-kube-api-access-6wsmt\") pod \"crc-debug-2hrpj\" (UID: \"1d8e9c63-4f22-4263-8a4e-4874d3145b5a\") " pod="openshift-must-gather-qdmzh/crc-debug-2hrpj"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.596815 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-host\") pod \"crc-debug-2hrpj\" (UID: \"1d8e9c63-4f22-4263-8a4e-4874d3145b5a\") " pod="openshift-must-gather-qdmzh/crc-debug-2hrpj"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.596880 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-host\") pod \"crc-debug-2hrpj\" (UID: \"1d8e9c63-4f22-4263-8a4e-4874d3145b5a\") " pod="openshift-must-gather-qdmzh/crc-debug-2hrpj"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.613619 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wsmt\" (UniqueName: \"kubernetes.io/projected/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-kube-api-access-6wsmt\") pod \"crc-debug-2hrpj\" (UID: \"1d8e9c63-4f22-4263-8a4e-4874d3145b5a\") " pod="openshift-must-gather-qdmzh/crc-debug-2hrpj"
Dec 08 09:51:05 crc kubenswrapper[4662]: I1208 09:51:05.721383 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qdmzh/crc-debug-2hrpj"
Dec 08 09:51:06 crc kubenswrapper[4662]: I1208 09:51:06.101330 4662 generic.go:334] "Generic (PLEG): container finished" podID="1d8e9c63-4f22-4263-8a4e-4874d3145b5a" containerID="8e55b0fb3a1d9152e2afc55385ead0396318aefde1dc32f977f1b245c16971dd" exitCode=1
Dec 08 09:51:06 crc kubenswrapper[4662]: I1208 09:51:06.101515 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qdmzh/crc-debug-2hrpj" event={"ID":"1d8e9c63-4f22-4263-8a4e-4874d3145b5a","Type":"ContainerDied","Data":"8e55b0fb3a1d9152e2afc55385ead0396318aefde1dc32f977f1b245c16971dd"}
Dec 08 09:51:06 crc kubenswrapper[4662]: I1208 09:51:06.101690 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qdmzh/crc-debug-2hrpj" event={"ID":"1d8e9c63-4f22-4263-8a4e-4874d3145b5a","Type":"ContainerStarted","Data":"aac0cd7d880ce394f4c2e7bcfeeb2c185278d2ee039d87cc12cb81672ec8d187"}
Dec 08 09:51:06 crc kubenswrapper[4662]: I1208 09:51:06.138930 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qdmzh/crc-debug-2hrpj"]
Dec 08 09:51:06 crc kubenswrapper[4662]: I1208 09:51:06.146386 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qdmzh/crc-debug-2hrpj"]
Dec 08 09:51:07 crc kubenswrapper[4662]: I1208 09:51:07.198887 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qdmzh/crc-debug-2hrpj"
Dec 08 09:51:07 crc kubenswrapper[4662]: I1208 09:51:07.325905 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wsmt\" (UniqueName: \"kubernetes.io/projected/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-kube-api-access-6wsmt\") pod \"1d8e9c63-4f22-4263-8a4e-4874d3145b5a\" (UID: \"1d8e9c63-4f22-4263-8a4e-4874d3145b5a\") "
Dec 08 09:51:07 crc kubenswrapper[4662]: I1208 09:51:07.326021 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-host\") pod \"1d8e9c63-4f22-4263-8a4e-4874d3145b5a\" (UID: \"1d8e9c63-4f22-4263-8a4e-4874d3145b5a\") "
Dec 08 09:51:07 crc kubenswrapper[4662]: I1208 09:51:07.326140 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-host" (OuterVolumeSpecName: "host") pod "1d8e9c63-4f22-4263-8a4e-4874d3145b5a" (UID: "1d8e9c63-4f22-4263-8a4e-4874d3145b5a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 08 09:51:07 crc kubenswrapper[4662]: I1208 09:51:07.326472 4662 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-host\") on node \"crc\" DevicePath \"\""
Dec 08 09:51:07 crc kubenswrapper[4662]: I1208 09:51:07.342381 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-kube-api-access-6wsmt" (OuterVolumeSpecName: "kube-api-access-6wsmt") pod "1d8e9c63-4f22-4263-8a4e-4874d3145b5a" (UID: "1d8e9c63-4f22-4263-8a4e-4874d3145b5a"). InnerVolumeSpecName "kube-api-access-6wsmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:51:07 crc kubenswrapper[4662]: I1208 09:51:07.427967 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wsmt\" (UniqueName: \"kubernetes.io/projected/1d8e9c63-4f22-4263-8a4e-4874d3145b5a-kube-api-access-6wsmt\") on node \"crc\" DevicePath \"\""
Dec 08 09:51:08 crc kubenswrapper[4662]: I1208 09:51:08.118924 4662 scope.go:117] "RemoveContainer" containerID="8e55b0fb3a1d9152e2afc55385ead0396318aefde1dc32f977f1b245c16971dd"
Dec 08 09:51:08 crc kubenswrapper[4662]: I1208 09:51:08.119024 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qdmzh/crc-debug-2hrpj"
Dec 08 09:51:08 crc kubenswrapper[4662]: I1208 09:51:08.708572 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8e9c63-4f22-4263-8a4e-4874d3145b5a" path="/var/lib/kubelet/pods/1d8e9c63-4f22-4263-8a4e-4874d3145b5a/volumes"
Dec 08 09:51:12 crc kubenswrapper[4662]: I1208 09:51:12.698542 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a"
Dec 08 09:51:12 crc kubenswrapper[4662]: E1208 09:51:12.699382 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:51:25 crc kubenswrapper[4662]: I1208 09:51:25.698907 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a"
Dec 08 09:51:25 crc kubenswrapper[4662]: E1208 09:51:25.703441 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:51:37 crc kubenswrapper[4662]: I1208 09:51:37.698120 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a"
Dec 08 09:51:37 crc kubenswrapper[4662]: E1208 09:51:37.699888 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.105895 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c965fbb88-wxgll_0ad91a9d-af07-430b-985e-64a6077d6267/barbican-api/0.log"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.178568 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-6c965fbb88-wxgll_0ad91a9d-af07-430b-985e-64a6077d6267/barbican-api-log/0.log"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.324605 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-76455569b6-zfxpp_0cdcfce4-297a-4fd9-8854-2b3bd51fc592/barbican-keystone-listener/0.log"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.379454 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-76455569b6-zfxpp_0cdcfce4-297a-4fd9-8854-2b3bd51fc592/barbican-keystone-listener-log/0.log"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.526805 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d594947cf-hhppl_36f4c1e2-7f91-48fd-9258-d560df73bb4a/barbican-worker/0.log"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.580442 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-d594947cf-hhppl_36f4c1e2-7f91-48fd-9258-d560df73bb4a/barbican-worker-log/0.log"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.649881 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-9rhct_138459d2-75b3-467c-9bf3-7b458dc202ad/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.799516 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_792a4482-03d6-4850-a692-26fa0269fadf/ceilometer-central-agent/0.log"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.905005 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_792a4482-03d6-4850-a692-26fa0269fadf/ceilometer-notification-agent/0.log"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.918645 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_792a4482-03d6-4850-a692-26fa0269fadf/proxy-httpd/0.log"
Dec 08 09:51:49 crc kubenswrapper[4662]: I1208 09:51:49.967976 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_792a4482-03d6-4850-a692-26fa0269fadf/sg-core/0.log"
Dec 08 09:51:50 crc kubenswrapper[4662]: I1208 09:51:50.171478 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-d6xdh_2f22774e-b0ec-4317-8085-9ff29aed798d/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 08 09:51:50 crc kubenswrapper[4662]: I1208 09:51:50.213203 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_77f9af46-d962-49bc-96cb-5740adc30c48/cinder-api/0.log"
Dec 08 09:51:50 crc kubenswrapper[4662]: I1208 09:51:50.260680 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_77f9af46-d962-49bc-96cb-5740adc30c48/cinder-api-log/0.log"
Dec 08 09:51:50 crc kubenswrapper[4662]: I1208 09:51:50.441757 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3f43552a-574c-4fb3-811d-e264f0cec162/cinder-scheduler/0.log"
Dec 08 09:51:50 crc kubenswrapper[4662]: I1208 09:51:50.734425 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_3f43552a-574c-4fb3-811d-e264f0cec162/probe/0.log"
Dec 08 09:51:50 crc kubenswrapper[4662]: I1208 09:51:50.899289 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-q42zc_c4c27c90-d5f8-4cba-a2b6-8e8d39a8ae5c/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Dec 08 09:51:50 crc kubenswrapper[4662]: I1208 09:51:50.977580 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7_d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-qxvw7_d0ca0a59-d64a-4ab0-8f16-a2192fecf3bd/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 08 09:51:51 crc kubenswrapper[4662]: I1208 09:51:51.084052 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8585fc4db5-l6l9w_8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a/init/0.log" Dec 08 09:51:51 crc kubenswrapper[4662]: I1208 09:51:51.295539 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8585fc4db5-l6l9w_8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a/init/0.log" Dec 08 09:51:51 crc kubenswrapper[4662]: I1208 09:51:51.342262 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-8585fc4db5-l6l9w_8a3672e3-a6bc-4a6f-8eb7-accfd9c46c5a/dnsmasq-dns/0.log" Dec 08 09:51:51 crc kubenswrapper[4662]: I1208 09:51:51.438055 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-46cwt_377fa4a0-70b5-48ee-a28a-1dd0eb481e68/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 08 09:51:51 crc kubenswrapper[4662]: I1208 09:51:51.579970 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6c86ffd5b9-ffgx7_bbc36380-3a09-4705-9c58-6795b96b8199/keystone-api/0.log" Dec 08 09:51:51 crc kubenswrapper[4662]: I1208 09:51:51.658778 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_6a6f2271-26f9-4108-b71c-539f915674f9/kube-state-metrics/0.log" Dec 08 09:51:51 crc kubenswrapper[4662]: I1208 09:51:51.933101 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5d95b89f45-hlj4c_739c97af-0cc4-4ba9-8707-2d15947dda47/neutron-api/0.log" Dec 08 09:51:52 crc kubenswrapper[4662]: I1208 09:51:52.072576 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-5d95b89f45-hlj4c_739c97af-0cc4-4ba9-8707-2d15947dda47/neutron-httpd/0.log" Dec 08 09:51:52 crc kubenswrapper[4662]: I1208 09:51:52.411514 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_84735737-6d94-4bae-8932-3651b52a2b37/nova-api-api/0.log" Dec 08 09:51:52 crc kubenswrapper[4662]: I1208 09:51:52.521937 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_84735737-6d94-4bae-8932-3651b52a2b37/nova-api-log/0.log" Dec 08 09:51:52 crc kubenswrapper[4662]: I1208 09:51:52.700784 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:51:52 crc kubenswrapper[4662]: E1208 09:51:52.701003 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:51:52 crc kubenswrapper[4662]: I1208 09:51:52.752765 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_56181caf-3a5f-49f9-8041-05084a240a3a/nova-cell0-conductor-conductor/0.log" Dec 08 09:51:52 crc kubenswrapper[4662]: I1208 09:51:52.922117 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_a8d481ea-222a-4dbb-9292-e576334d6d45/nova-cell1-conductor-conductor/0.log" Dec 08 
09:51:53 crc kubenswrapper[4662]: I1208 09:51:53.185259 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_8251fb8d-8798-4373-a7ae-2336eb6dc2d3/nova-cell1-novncproxy-novncproxy/0.log" Dec 08 09:51:53 crc kubenswrapper[4662]: I1208 09:51:53.469020 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_411bcfde-bb2d-4274-a486-42d84f76e1c2/nova-metadata-log/0.log" Dec 08 09:51:53 crc kubenswrapper[4662]: I1208 09:51:53.873825 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_e9acfb80-5f9e-4340-9681-95d7c325bfd2/nova-scheduler-scheduler/0.log" Dec 08 09:51:53 crc kubenswrapper[4662]: I1208 09:51:53.934027 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b06c22d2-e96f-445d-82d6-54f276df38c8/mysql-bootstrap/0.log" Dec 08 09:51:54 crc kubenswrapper[4662]: I1208 09:51:54.127376 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b06c22d2-e96f-445d-82d6-54f276df38c8/mysql-bootstrap/0.log" Dec 08 09:51:54 crc kubenswrapper[4662]: I1208 09:51:54.145347 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_411bcfde-bb2d-4274-a486-42d84f76e1c2/nova-metadata-metadata/0.log" Dec 08 09:51:54 crc kubenswrapper[4662]: I1208 09:51:54.229130 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b06c22d2-e96f-445d-82d6-54f276df38c8/galera/0.log" Dec 08 09:51:54 crc kubenswrapper[4662]: I1208 09:51:54.403647 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0b3100ca-3241-444e-b279-248592e848fe/mysql-bootstrap/0.log" Dec 08 09:51:54 crc kubenswrapper[4662]: I1208 09:51:54.681465 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_be8af0bd-5d4a-4a83-84f5-5687dfeaab59/openstackclient/0.log" Dec 08 09:51:54 crc kubenswrapper[4662]: I1208 09:51:54.685552 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0b3100ca-3241-444e-b279-248592e848fe/galera/0.log" Dec 08 09:51:54 crc kubenswrapper[4662]: I1208 09:51:54.753332 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_0b3100ca-3241-444e-b279-248592e848fe/mysql-bootstrap/0.log" Dec 08 09:51:54 crc kubenswrapper[4662]: I1208 09:51:54.910648 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-5wz6p_4760fc65-89b9-4df9-90ca-ab6e968955dd/openstack-network-exporter/0.log" Dec 08 09:51:55 crc kubenswrapper[4662]: I1208 09:51:55.064985 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rbsxq_27c667fc-f9ca-4305-aa3b-4be2ae723674/ovsdb-server-init/0.log" Dec 08 09:51:55 crc kubenswrapper[4662]: I1208 09:51:55.280862 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rbsxq_27c667fc-f9ca-4305-aa3b-4be2ae723674/ovsdb-server/0.log" Dec 08 09:51:55 crc kubenswrapper[4662]: I1208 09:51:55.297678 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rbsxq_27c667fc-f9ca-4305-aa3b-4be2ae723674/ovsdb-server-init/0.log" Dec 08 09:51:55 crc kubenswrapper[4662]: I1208 09:51:55.407679 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-rbsxq_27c667fc-f9ca-4305-aa3b-4be2ae723674/ovs-vswitchd/0.log" Dec 08 09:51:55 crc 
kubenswrapper[4662]: I1208 09:51:55.593903 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-tx8rx_4f875ff2-9f06-470b-89dd-2f6215a7e40c/ovn-controller/0.log" Dec 08 09:51:55 crc kubenswrapper[4662]: I1208 09:51:55.662040 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fc55a4a3-d846-465b-914e-225c9ee2bfc5/openstack-network-exporter/0.log" Dec 08 09:51:55 crc kubenswrapper[4662]: I1208 09:51:55.740824 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_fc55a4a3-d846-465b-914e-225c9ee2bfc5/ovn-northd/0.log" Dec 08 09:51:55 crc kubenswrapper[4662]: I1208 09:51:55.876096 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a965cd5f-6888-4033-9d26-02978e2e0f36/openstack-network-exporter/0.log" Dec 08 09:51:55 crc kubenswrapper[4662]: I1208 09:51:55.930364 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_a965cd5f-6888-4033-9d26-02978e2e0f36/ovsdbserver-nb/0.log" Dec 08 09:51:56 crc kubenswrapper[4662]: I1208 09:51:56.090580 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2d8d4f46-241c-490e-b219-2600ed0a74c5/openstack-network-exporter/0.log" Dec 08 09:51:56 crc kubenswrapper[4662]: I1208 09:51:56.232673 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_2d8d4f46-241c-490e-b219-2600ed0a74c5/ovsdbserver-sb/0.log" Dec 08 09:51:56 crc kubenswrapper[4662]: I1208 09:51:56.303775 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6775bc75d4-c5zmq_0dcf5d14-7976-45be-bc8a-2a551cf2babc/placement-api/0.log" Dec 08 09:51:56 crc kubenswrapper[4662]: I1208 09:51:56.382788 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6775bc75d4-c5zmq_0dcf5d14-7976-45be-bc8a-2a551cf2babc/placement-log/0.log" Dec 08 09:51:56 crc kubenswrapper[4662]: I1208 09:51:56.590201 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e86949df-33b5-4ea8-86fc-d8a9ed982826/setup-container/0.log" Dec 08 09:51:56 crc kubenswrapper[4662]: I1208 09:51:56.926417 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e86949df-33b5-4ea8-86fc-d8a9ed982826/setup-container/0.log" Dec 08 09:51:57 crc kubenswrapper[4662]: I1208 09:51:57.008361 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_e86949df-33b5-4ea8-86fc-d8a9ed982826/rabbitmq/0.log" Dec 08 09:51:57 crc kubenswrapper[4662]: I1208 09:51:57.075893 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7866efc-4d7d-4d74-907b-e01dbdeaefaa/setup-container/0.log" Dec 08 09:51:57 crc kubenswrapper[4662]: I1208 09:51:57.224137 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7866efc-4d7d-4d74-907b-e01dbdeaefaa/setup-container/0.log" Dec 08 09:51:57 crc kubenswrapper[4662]: I1208 09:51:57.275181 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_a7866efc-4d7d-4d74-907b-e01dbdeaefaa/rabbitmq/0.log" Dec 08 09:51:57 crc kubenswrapper[4662]: I1208 09:51:57.377480 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-bqpsc_3deffeb8-98a7-4f85-aa26-705bb171d886/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 08 09:51:57 crc 
kubenswrapper[4662]: I1208 09:51:57.542403 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-x5bs8_13e5cc43-bfc2-4f92-a602-48e510e7f9fe/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Dec 08 09:51:57 crc kubenswrapper[4662]: I1208 09:51:57.647823 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-pzvmm_0aa15dee-b462-4ee2-89cc-00a38071db66/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Dec 08 09:51:57 crc kubenswrapper[4662]: I1208 09:51:57.818002 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-nhcfd_9b8cd462-55c7-451b-a985-85606ca5374b/ssh-known-hosts-edpm-deployment/0.log" Dec 08 09:51:58 crc kubenswrapper[4662]: I1208 09:51:58.083885 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-p6t6n_c10c6537-3808-47c8-bf89-01e1bff9f51e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Dec 08 09:51:59 crc kubenswrapper[4662]: I1208 09:51:59.542991 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0740d08e-8e81-4133-9969-7b777cfef0f7/memcached/0.log" Dec 08 09:52:06 crc kubenswrapper[4662]: I1208 09:52:06.706595 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:52:06 crc kubenswrapper[4662]: E1208 09:52:06.707250 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:52:19 crc kubenswrapper[4662]: I1208 09:52:19.956294 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-pnmbz_022626c4-d3b3-4c80-884c-6ae24361955a/kube-rbac-proxy/0.log" Dec 08 09:52:19 crc kubenswrapper[4662]: I1208 09:52:19.999921 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7d9dfd778-pnmbz_022626c4-d3b3-4c80-884c-6ae24361955a/manager/0.log" Dec 08 09:52:20 crc kubenswrapper[4662]: I1208 09:52:20.175756 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-l9m5c_b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c/manager/0.log" Dec 08 09:52:20 crc kubenswrapper[4662]: I1208 09:52:20.213855 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-6c677c69b-l9m5c_b3223d1b-cf63-4ac1-ab6a-ce38f6bd2b6c/kube-rbac-proxy/0.log" Dec 08 09:52:20 crc kubenswrapper[4662]: I1208 09:52:20.387237 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf_359a63b7-e36a-4741-a91f-545218de47a5/util/0.log" Dec 08 09:52:20 crc kubenswrapper[4662]: I1208 09:52:20.588894 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf_359a63b7-e36a-4741-a91f-545218de47a5/pull/0.log" Dec 08 09:52:20 crc kubenswrapper[4662]: I1208 
09:52:20.615553 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf_359a63b7-e36a-4741-a91f-545218de47a5/util/0.log" Dec 08 09:52:20 crc kubenswrapper[4662]: I1208 09:52:20.684428 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf_359a63b7-e36a-4741-a91f-545218de47a5/pull/0.log" Dec 08 09:52:20 crc kubenswrapper[4662]: I1208 09:52:20.842771 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf_359a63b7-e36a-4741-a91f-545218de47a5/pull/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.065818 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf_359a63b7-e36a-4741-a91f-545218de47a5/extract/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.069449 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_dd3606a28663ccaa876196baea210b113ba684334fa9aca8b1eaa52ef0f2srf_359a63b7-e36a-4741-a91f-545218de47a5/util/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.257659 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-wr5zg_dda05715-875a-41d4-9ee6-c81406a965a9/kube-rbac-proxy/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.286135 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-697fb699cf-wr5zg_dda05715-875a-41d4-9ee6-c81406a965a9/manager/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.441436 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-hrlb9_e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe/kube-rbac-proxy/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.479366 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-5697bb5779-hrlb9_e594b5f0-0aac-4a5f-ba9e-5849c57f2cfe/manager/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.501373 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-zsdxn_0fcb859f-b723-4629-902c-68696b4b8995/kube-rbac-proxy/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.665721 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-4kwdx_3882e308-ba7b-48c8-94f0-354b3926c925/kube-rbac-proxy/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.692511 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5f64f6f8bb-zsdxn_0fcb859f-b723-4629-902c-68696b4b8995/manager/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.697406 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:52:21 crc kubenswrapper[4662]: E1208 09:52:21.697621 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.733679 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-68c6d99b8f-4kwdx_3882e308-ba7b-48c8-94f0-354b3926c925/manager/0.log" Dec 08 09:52:21 crc kubenswrapper[4662]: I1208 09:52:21.877247 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-wgcnt_a51352cf-c6f2-40cc-9b72-035737c28e0e/kube-rbac-proxy/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.105545 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-78d48bff9d-wgcnt_a51352cf-c6f2-40cc-9b72-035737c28e0e/manager/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.125646 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-85425_5a541aca-2d5d-432d-a375-d639af4927ee/kube-rbac-proxy/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.154393 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-967d97867-85425_5a541aca-2d5d-432d-a375-d639af4927ee/manager/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.360492 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-8zz9f_60c3c9e4-042b-445f-96fe-7d4583ae29ee/kube-rbac-proxy/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.447994 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-7765d96ddf-8zz9f_60c3c9e4-042b-445f-96fe-7d4583ae29ee/manager/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.500845 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-jgbb9_41089106-3be5-42f3-9c98-76ddb4e0a32c/kube-rbac-proxy/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.573129 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-5b5fd79c9c-jgbb9_41089106-3be5-42f3-9c98-76ddb4e0a32c/manager/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.668713 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-hhz55_2c718d76-ffa3-479f-972d-451437ca9b8e/kube-rbac-proxy/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.735891 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-79c8c4686c-hhz55_2c718d76-ffa3-479f-972d-451437ca9b8e/manager/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.938424 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-k7nhg_edef4f76-66e3-4431-8f3e-15b1be7dc525/kube-rbac-proxy/0.log" Dec 08 09:52:22 crc kubenswrapper[4662]: I1208 09:52:22.946443 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-5fdfd5b6b5-k7nhg_edef4f76-66e3-4431-8f3e-15b1be7dc525/manager/0.log" Dec 08 09:52:23 
crc kubenswrapper[4662]: I1208 09:52:23.053433 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-qtdrq_00caf547-24eb-4e92-9294-900fbf53f068/kube-rbac-proxy/0.log" Dec 08 09:52:23 crc kubenswrapper[4662]: I1208 09:52:23.195599 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-697bc559fc-qtdrq_00caf547-24eb-4e92-9294-900fbf53f068/manager/0.log" Dec 08 09:52:23 crc kubenswrapper[4662]: I1208 09:52:23.245588 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-9b5bw_99cf0df3-a30d-4a1b-aa55-d5a814afd119/kube-rbac-proxy/0.log" Dec 08 09:52:23 crc kubenswrapper[4662]: I1208 09:52:23.340423 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-998648c74-9b5bw_99cf0df3-a30d-4a1b-aa55-d5a814afd119/manager/0.log" Dec 08 09:52:23 crc kubenswrapper[4662]: I1208 09:52:23.467867 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fsnfnd_078c0bad-4b25-4cdf-8d11-abfa0430137c/kube-rbac-proxy/0.log" Dec 08 09:52:23 crc kubenswrapper[4662]: I1208 09:52:23.471886 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-84b575879fsnfnd_078c0bad-4b25-4cdf-8d11-abfa0430137c/manager/0.log" Dec 08 09:52:23 crc kubenswrapper[4662]: I1208 09:52:23.853662 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-5974cc6b8d-bqsmw_0e25126d-5ab8-4691-aa80-352149bc813b/operator/0.log" Dec 08 09:52:23 crc kubenswrapper[4662]: I1208 09:52:23.971802 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-l2kgc_4fa2b2d9-9c1a-411e-a7d8-b3ad321598f5/registry-server/0.log" Dec 08 09:52:24 crc kubenswrapper[4662]: I1208 09:52:24.159162 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-8hlzg_08f788cf-75bb-4beb-bb4e-f9fd39c18972/kube-rbac-proxy/0.log" Dec 08 09:52:24 crc kubenswrapper[4662]: I1208 09:52:24.385251 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-b6456fdb6-8hlzg_08f788cf-75bb-4beb-bb4e-f9fd39c18972/manager/0.log" Dec 08 09:52:24 crc kubenswrapper[4662]: I1208 09:52:24.402087 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-v5zsp_92ae4959-7652-4be1-9349-d1a1fbb32d68/kube-rbac-proxy/0.log" Dec 08 09:52:24 crc kubenswrapper[4662]: I1208 09:52:24.518006 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-8999f4b55-mhd69_91ef13a3-dfb6-4f20-8b33-f9cbb9c60c60/manager/0.log" Dec 08 09:52:24 crc kubenswrapper[4662]: I1208 09:52:24.623190 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-78f8948974-v5zsp_92ae4959-7652-4be1-9349-d1a1fbb32d68/manager/0.log" Dec 08 09:52:24 crc kubenswrapper[4662]: I1208 09:52:24.638800 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-v66fn_4ea8fed4-0dca-430b-bb22-8a7e8fdee0b8/operator/0.log" Dec 
08 09:52:24 crc kubenswrapper[4662]: I1208 09:52:24.745581 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-kh2rv_7a4111f6-9632-4997-a71b-a514f200b5cf/kube-rbac-proxy/0.log" Dec 08 09:52:24 crc kubenswrapper[4662]: I1208 09:52:24.837965 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9d58d64bc-kh2rv_7a4111f6-9632-4997-a71b-a514f200b5cf/manager/0.log" Dec 08 09:52:24 crc kubenswrapper[4662]: I1208 09:52:24.921028 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-jnfw7_606d0c5a-d4b6-46c3-9cd8-367895904823/kube-rbac-proxy/0.log" Dec 08 09:52:25 crc kubenswrapper[4662]: I1208 09:52:25.030964 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-58d5ff84df-jnfw7_606d0c5a-d4b6-46c3-9cd8-367895904823/manager/0.log" Dec 08 09:52:25 crc kubenswrapper[4662]: I1208 09:52:25.073547 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-knpnl_bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2/kube-rbac-proxy/0.log" Dec 08 09:52:25 crc kubenswrapper[4662]: I1208 09:52:25.078405 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-5854674fcc-knpnl_bf88a9c5-bcb3-4be6-be18-8418e7ac3fc2/manager/0.log" Dec 08 09:52:25 crc kubenswrapper[4662]: I1208 09:52:25.165491 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-6xkzt_19e2da99-64f8-48f3-974b-5a33bdbe683d/kube-rbac-proxy/0.log" Dec 08 09:52:25 crc kubenswrapper[4662]: I1208 09:52:25.211465 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-667bd8d554-6xkzt_19e2da99-64f8-48f3-974b-5a33bdbe683d/manager/0.log" Dec 08 09:52:32 crc kubenswrapper[4662]: I1208 09:52:32.698148 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:52:32 crc kubenswrapper[4662]: E1208 09:52:32.700231 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:52:43 crc kubenswrapper[4662]: I1208 09:52:43.700094 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:52:43 crc kubenswrapper[4662]: E1208 09:52:43.701675 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:52:45 crc kubenswrapper[4662]: I1208 09:52:45.748957 4662 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-9l4m9_c47e99e0-a11e-4b1f-a6c1-f9ec2d3e4d70/control-plane-machine-set-operator/0.log" Dec 08 09:52:46 crc kubenswrapper[4662]: I1208 09:52:46.003862 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5kmdj_637eec7a-5d24-47b7-a111-ceaf0a27ebc1/machine-api-operator/0.log" Dec 08 09:52:46 crc kubenswrapper[4662]: I1208 09:52:46.014265 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5kmdj_637eec7a-5d24-47b7-a111-ceaf0a27ebc1/kube-rbac-proxy/0.log" Dec 08 09:52:58 crc kubenswrapper[4662]: I1208 09:52:58.697692 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:52:58 crc kubenswrapper[4662]: E1208 09:52:58.698459 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:52:59 crc kubenswrapper[4662]: I1208 09:52:59.401848 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-rxxpp_befccb54-8a05-49dd-b709-b38fbdbd9a04/cert-manager-controller/0.log" Dec 08 09:52:59 crc kubenswrapper[4662]: I1208 09:52:59.573832 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-rdgcp_562e5ac7-c24f-4122-8b58-335957d2545c/cert-manager-cainjector/0.log" Dec 08 09:52:59 crc kubenswrapper[4662]: I1208 09:52:59.623735 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-v4r7x_081cce8e-c3af-41d8-9146-5d62bbe487b8/cert-manager-webhook/0.log" Dec 08 09:53:10 crc kubenswrapper[4662]: I1208 09:53:10.700514 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:53:10 crc kubenswrapper[4662]: E1208 09:53:10.701302 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:53:12 crc kubenswrapper[4662]: I1208 09:53:12.962359 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7fbb5f6569-wk2lb_a0a30b59-aa4e-44e3-8ea6-1b95eaa2caa9/nmstate-console-plugin/0.log" Dec 08 09:53:13 crc kubenswrapper[4662]: I1208 09:53:13.328751 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-9gzt4_5ba14e04-5fa6-4c63-b8a1-4138df25d0ce/nmstate-handler/0.log" Dec 08 09:53:13 crc kubenswrapper[4662]: I1208 09:53:13.449720 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-v2z8b_1c33596c-571d-4ed0-ab96-408c6246dde3/kube-rbac-proxy/0.log" Dec 08 09:53:13 crc kubenswrapper[4662]: I1208 09:53:13.524778 4662 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-7f946cbc9-v2z8b_1c33596c-571d-4ed0-ab96-408c6246dde3/nmstate-metrics/0.log" Dec 08 09:53:13 crc kubenswrapper[4662]: I1208 09:53:13.704122 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-5b5b58f5c8-p4d52_64cd0b32-387f-4149-936a-43b7dac53247/nmstate-operator/0.log" Dec 08 09:53:13 crc kubenswrapper[4662]: I1208 09:53:13.739726 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f6d4c5ccb-d4mb7_1adc5600-df22-4b2d-b6cf-2e044117c530/nmstate-webhook/0.log" Dec 08 09:53:23 crc kubenswrapper[4662]: I1208 09:53:23.697864 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:53:23 crc kubenswrapper[4662]: E1208 09:53:23.698624 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.080173 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-hqc7f_5cead70a-4652-4695-87aa-ef3d3ecb419d/kube-rbac-proxy/0.log" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.227514 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-f8648f98b-hqc7f_5cead70a-4652-4695-87aa-ef3d3ecb419d/controller/0.log" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.309688 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-frr-files/0.log" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.559455 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-frr-files/0.log" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.597200 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-reloader/0.log" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.609878 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-metrics/0.log" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.627152 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-reloader/0.log" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.851303 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-reloader/0.log" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.878794 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-frr-files/0.log" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.932205 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-metrics/0.log" Dec 08 09:53:29 crc kubenswrapper[4662]: I1208 09:53:29.949675 4662 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-metrics/0.log" Dec 08 09:53:30 crc kubenswrapper[4662]: I1208 09:53:30.106155 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-frr-files/0.log" Dec 08 09:53:30 crc kubenswrapper[4662]: I1208 09:53:30.176791 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/controller/0.log" Dec 08 09:53:30 crc kubenswrapper[4662]: I1208 09:53:30.189622 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-reloader/0.log" Dec 08 09:53:30 crc kubenswrapper[4662]: I1208 09:53:30.221901 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/cp-metrics/0.log" Dec 08 09:53:30 crc kubenswrapper[4662]: I1208 09:53:30.423388 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/frr-metrics/0.log" Dec 08 09:53:30 crc kubenswrapper[4662]: I1208 09:53:30.495703 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/kube-rbac-proxy/0.log" Dec 08 09:53:30 crc kubenswrapper[4662]: I1208 09:53:30.513801 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/kube-rbac-proxy-frr/0.log" Dec 08 09:53:30 crc kubenswrapper[4662]: I1208 09:53:30.668331 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/reloader/0.log" Dec 08 09:53:31 crc kubenswrapper[4662]: I1208 09:53:31.014964 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-d84ss_77b8b61e-ff7a-424e-bbd8-9f20ce485c51/frr/0.log" Dec 08 09:53:31 crc kubenswrapper[4662]: I1208 09:53:31.052285 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7fcb986d4-llfmt_855c2f04-3def-48ad-b73c-535485327343/frr-k8s-webhook-server/0.log" Dec 08 09:53:31 crc kubenswrapper[4662]: I1208 09:53:31.066566 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6bc95b94b-xrzfb_05682e28-5a70-4569-ac16-8cc0f3f17c39/manager/0.log" Dec 08 09:53:31 crc kubenswrapper[4662]: I1208 09:53:31.262861 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-77b87b85c5-226hm_cd084a40-bf8b-4764-ba4f-c587c5132b76/webhook-server/0.log" Dec 08 09:53:31 crc kubenswrapper[4662]: I1208 09:53:31.338238 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4q6hz_52a4fbf9-995c-4926-8d43-21adb4a9455d/kube-rbac-proxy/0.log" Dec 08 09:53:31 crc kubenswrapper[4662]: I1208 09:53:31.740649 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-4q6hz_52a4fbf9-995c-4926-8d43-21adb4a9455d/speaker/0.log" Dec 08 09:53:38 crc kubenswrapper[4662]: I1208 09:53:38.698572 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:53:38 crc kubenswrapper[4662]: E1208 09:53:38.699157 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:53:44 crc kubenswrapper[4662]: I1208 09:53:44.514951 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf_bb54475d-40fd-4b72-9936-3b9de9625c8e/util/0.log" Dec 08 09:53:44 crc kubenswrapper[4662]: I1208 09:53:44.734929 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf_bb54475d-40fd-4b72-9936-3b9de9625c8e/util/0.log" Dec 08 09:53:44 crc kubenswrapper[4662]: I1208 09:53:44.780680 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf_bb54475d-40fd-4b72-9936-3b9de9625c8e/pull/0.log" Dec 08 09:53:44 crc kubenswrapper[4662]: I1208 09:53:44.844240 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf_bb54475d-40fd-4b72-9936-3b9de9625c8e/pull/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.001683 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf_bb54475d-40fd-4b72-9936-3b9de9625c8e/extract/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.017906 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf_bb54475d-40fd-4b72-9936-3b9de9625c8e/util/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.035458 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_5064f9f8917b246f69f5d7fc025e7e6c34236c02bca31167615d38212fp4psf_bb54475d-40fd-4b72-9936-3b9de9625c8e/pull/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.192129 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc_1de6e66b-f63f-4f47-b38d-52a2ff32ce38/util/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.360872 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc_1de6e66b-f63f-4f47-b38d-52a2ff32ce38/pull/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.387374 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc_1de6e66b-f63f-4f47-b38d-52a2ff32ce38/pull/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.398703 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc_1de6e66b-f63f-4f47-b38d-52a2ff32ce38/util/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.565771 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc_1de6e66b-f63f-4f47-b38d-52a2ff32ce38/util/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.597705 4662 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc_1de6e66b-f63f-4f47-b38d-52a2ff32ce38/pull/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.607272 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_af69698b82fdf008f5ff9e195cf808a654240e16b13dcd924b74994f83gtwmc_1de6e66b-f63f-4f47-b38d-52a2ff32ce38/extract/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.776753 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lxnn_ae8e99c2-28da-435a-b3bf-f3b7e71f783c/extract-utilities/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.947947 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lxnn_ae8e99c2-28da-435a-b3bf-f3b7e71f783c/extract-content/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.963146 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lxnn_ae8e99c2-28da-435a-b3bf-f3b7e71f783c/extract-content/0.log" Dec 08 09:53:45 crc kubenswrapper[4662]: I1208 09:53:45.985102 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lxnn_ae8e99c2-28da-435a-b3bf-f3b7e71f783c/extract-utilities/0.log" Dec 08 09:53:46 crc kubenswrapper[4662]: I1208 09:53:46.276053 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lxnn_ae8e99c2-28da-435a-b3bf-f3b7e71f783c/extract-utilities/0.log" Dec 08 09:53:46 crc kubenswrapper[4662]: I1208 09:53:46.341556 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lxnn_ae8e99c2-28da-435a-b3bf-f3b7e71f783c/extract-content/0.log" Dec 08 09:53:46 crc kubenswrapper[4662]: I1208 09:53:46.573411 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-8lxnn_ae8e99c2-28da-435a-b3bf-f3b7e71f783c/registry-server/0.log" Dec 08 09:53:46 crc kubenswrapper[4662]: I1208 09:53:46.601356 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-smctn_09a4e6a7-2384-4ead-a4c7-396ff35e0bee/extract-utilities/0.log" Dec 08 09:53:46 crc kubenswrapper[4662]: I1208 09:53:46.806544 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-smctn_09a4e6a7-2384-4ead-a4c7-396ff35e0bee/extract-content/0.log" Dec 08 09:53:46 crc kubenswrapper[4662]: I1208 09:53:46.815909 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-smctn_09a4e6a7-2384-4ead-a4c7-396ff35e0bee/extract-content/0.log" Dec 08 09:53:46 crc kubenswrapper[4662]: I1208 09:53:46.835040 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-smctn_09a4e6a7-2384-4ead-a4c7-396ff35e0bee/extract-utilities/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 09:53:47.054369 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-smctn_09a4e6a7-2384-4ead-a4c7-396ff35e0bee/extract-utilities/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 09:53:47.056546 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-smctn_09a4e6a7-2384-4ead-a4c7-396ff35e0bee/extract-content/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 
09:53:47.340141 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-k8mdp_8de480c6-7855-45e8-91ad-574e204414ae/marketplace-operator/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 09:53:47.349119 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-smctn_09a4e6a7-2384-4ead-a4c7-396ff35e0bee/registry-server/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 09:53:47.481295 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sc8bf_7473f855-7fa1-44f3-8841-6041a045c35a/extract-utilities/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 09:53:47.575214 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sc8bf_7473f855-7fa1-44f3-8841-6041a045c35a/extract-utilities/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 09:53:47.660419 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sc8bf_7473f855-7fa1-44f3-8841-6041a045c35a/extract-content/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 09:53:47.672500 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sc8bf_7473f855-7fa1-44f3-8841-6041a045c35a/extract-content/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 09:53:47.841058 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sc8bf_7473f855-7fa1-44f3-8841-6041a045c35a/extract-content/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 09:53:47.891047 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sc8bf_7473f855-7fa1-44f3-8841-6041a045c35a/extract-utilities/0.log" Dec 08 09:53:47 crc kubenswrapper[4662]: I1208 09:53:47.915315 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-sc8bf_7473f855-7fa1-44f3-8841-6041a045c35a/registry-server/0.log" Dec 08 09:53:48 crc kubenswrapper[4662]: I1208 09:53:48.060968 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5dfj_2b48c93a-7bd3-472e-b21d-59eb414549d1/extract-utilities/0.log" Dec 08 09:53:48 crc kubenswrapper[4662]: I1208 09:53:48.351838 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5dfj_2b48c93a-7bd3-472e-b21d-59eb414549d1/extract-content/0.log" Dec 08 09:53:48 crc kubenswrapper[4662]: I1208 09:53:48.353583 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5dfj_2b48c93a-7bd3-472e-b21d-59eb414549d1/extract-utilities/0.log" Dec 08 09:53:48 crc kubenswrapper[4662]: I1208 09:53:48.364971 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5dfj_2b48c93a-7bd3-472e-b21d-59eb414549d1/extract-content/0.log" Dec 08 09:53:48 crc kubenswrapper[4662]: I1208 09:53:48.527084 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5dfj_2b48c93a-7bd3-472e-b21d-59eb414549d1/extract-utilities/0.log" Dec 08 09:53:48 crc kubenswrapper[4662]: I1208 09:53:48.600464 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5dfj_2b48c93a-7bd3-472e-b21d-59eb414549d1/extract-content/0.log" Dec 08 09:53:48 crc kubenswrapper[4662]: I1208 09:53:48.731922 4662 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-j5dfj_2b48c93a-7bd3-472e-b21d-59eb414549d1/registry-server/0.log" Dec 08 09:53:52 crc kubenswrapper[4662]: I1208 09:53:52.703965 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:53:52 crc kubenswrapper[4662]: E1208 09:53:52.704636 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:54:07 crc kubenswrapper[4662]: I1208 09:54:07.697971 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:54:07 crc kubenswrapper[4662]: E1208 09:54:07.698555 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:54:22 crc kubenswrapper[4662]: I1208 09:54:22.697885 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:54:22 crc kubenswrapper[4662]: E1208 09:54:22.698821 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:54:35 crc kubenswrapper[4662]: I1208 09:54:35.698091 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:54:35 crc kubenswrapper[4662]: E1208 09:54:35.698879 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:54:50 crc kubenswrapper[4662]: I1208 09:54:50.701375 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:54:50 crc kubenswrapper[4662]: E1208 09:54:50.702259 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:55:03 crc 
kubenswrapper[4662]: I1208 09:55:03.696959 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:55:03 crc kubenswrapper[4662]: E1208 09:55:03.697871 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:55:18 crc kubenswrapper[4662]: I1208 09:55:18.697110 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:55:18 crc kubenswrapper[4662]: E1208 09:55:18.697927 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:55:30 crc kubenswrapper[4662]: I1208 09:55:30.697579 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:55:30 crc kubenswrapper[4662]: E1208 09:55:30.698284 4662 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-5dzps_openshift-machine-config-operator(0e629796-86fa-4436-8a01-326fc70c7dc1)\"" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" Dec 08 09:55:43 crc kubenswrapper[4662]: I1208 09:55:43.697966 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a" Dec 08 09:55:44 crc kubenswrapper[4662]: I1208 09:55:44.399045 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"2e29e3cd82914bc72ad862d3f34f10f62625c6c1558dc9e8796600fcac6c2a6a"} Dec 08 09:55:45 crc kubenswrapper[4662]: I1208 09:55:45.413616 4662 generic.go:334] "Generic (PLEG): container finished" podID="ef624d5b-6078-4049-b3a4-3e9cbbe2730f" containerID="8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71" exitCode=0 Dec 08 09:55:45 crc kubenswrapper[4662]: I1208 09:55:45.413700 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-qdmzh/must-gather-fqvzr" event={"ID":"ef624d5b-6078-4049-b3a4-3e9cbbe2730f","Type":"ContainerDied","Data":"8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71"} Dec 08 09:55:45 crc kubenswrapper[4662]: I1208 09:55:45.416198 4662 scope.go:117] "RemoveContainer" containerID="8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71" Dec 08 09:55:45 crc kubenswrapper[4662]: I1208 09:55:45.997608 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qdmzh_must-gather-fqvzr_ef624d5b-6078-4049-b3a4-3e9cbbe2730f/gather/0.log" Dec 08 09:55:53 crc kubenswrapper[4662]: I1208 09:55:53.942905 
4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-qdmzh/must-gather-fqvzr"] Dec 08 09:55:53 crc kubenswrapper[4662]: I1208 09:55:53.943756 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-qdmzh/must-gather-fqvzr" podUID="ef624d5b-6078-4049-b3a4-3e9cbbe2730f" containerName="copy" containerID="cri-o://c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7" gracePeriod=2 Dec 08 09:55:53 crc kubenswrapper[4662]: I1208 09:55:53.959389 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-qdmzh/must-gather-fqvzr"] Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.355640 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qdmzh_must-gather-fqvzr_ef624d5b-6078-4049-b3a4-3e9cbbe2730f/copy/0.log" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.356309 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-qdmzh/must-gather-fqvzr" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.456821 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5n66\" (UniqueName: \"kubernetes.io/projected/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-kube-api-access-x5n66\") pod \"ef624d5b-6078-4049-b3a4-3e9cbbe2730f\" (UID: \"ef624d5b-6078-4049-b3a4-3e9cbbe2730f\") " Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.457046 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-must-gather-output\") pod \"ef624d5b-6078-4049-b3a4-3e9cbbe2730f\" (UID: \"ef624d5b-6078-4049-b3a4-3e9cbbe2730f\") " Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.465939 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-kube-api-access-x5n66" (OuterVolumeSpecName: "kube-api-access-x5n66") pod "ef624d5b-6078-4049-b3a4-3e9cbbe2730f" (UID: "ef624d5b-6078-4049-b3a4-3e9cbbe2730f"). InnerVolumeSpecName "kube-api-access-x5n66". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.487478 4662 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-qdmzh_must-gather-fqvzr_ef624d5b-6078-4049-b3a4-3e9cbbe2730f/copy/0.log" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.488052 4662 generic.go:334] "Generic (PLEG): container finished" podID="ef624d5b-6078-4049-b3a4-3e9cbbe2730f" containerID="c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7" exitCode=143 Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.488155 4662 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-qdmzh/must-gather-fqvzr" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.488172 4662 scope.go:117] "RemoveContainer" containerID="c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.515540 4662 scope.go:117] "RemoveContainer" containerID="8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.559529 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5n66\" (UniqueName: \"kubernetes.io/projected/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-kube-api-access-x5n66\") on node \"crc\" DevicePath \"\"" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.595069 4662 scope.go:117] "RemoveContainer" containerID="c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7" Dec 08 09:55:54 crc kubenswrapper[4662]: E1208 09:55:54.595904 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7\": container with ID starting with c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7 not found: ID does not exist" containerID="c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.596001 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7"} err="failed to get container status \"c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7\": rpc error: code = NotFound desc = could not find container \"c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7\": container with ID starting with c7e36ed076fbb699701681aff69c625fe9dcb1e4e22044178e529973a390dfe7 not found: ID does not exist" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.596096 4662 scope.go:117] "RemoveContainer" containerID="8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71" Dec 08 09:55:54 crc kubenswrapper[4662]: E1208 09:55:54.596447 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71\": container with ID starting with 8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71 not found: ID does not exist" containerID="8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.596523 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71"} err="failed to get container status \"8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71\": rpc error: code = NotFound desc = could not find container \"8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71\": container with ID starting with 8f754b2c776a90e9f164212b8419a389d7940f36da0ec7f3ab92ad0ee49c2b71 not found: ID does not exist" Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.628008 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "ef624d5b-6078-4049-b3a4-3e9cbbe2730f" (UID: "ef624d5b-6078-4049-b3a4-3e9cbbe2730f"). 
InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.661130 4662 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/ef624d5b-6078-4049-b3a4-3e9cbbe2730f-must-gather-output\") on node \"crc\" DevicePath \"\""
Dec 08 09:55:54 crc kubenswrapper[4662]: I1208 09:55:54.711139 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef624d5b-6078-4049-b3a4-3e9cbbe2730f" path="/var/lib/kubelet/pods/ef624d5b-6078-4049-b3a4-3e9cbbe2730f/volumes"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.054529 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hx87h"]
Dec 08 09:56:02 crc kubenswrapper[4662]: E1208 09:56:02.056113 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef624d5b-6078-4049-b3a4-3e9cbbe2730f" containerName="gather"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.056133 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef624d5b-6078-4049-b3a4-3e9cbbe2730f" containerName="gather"
Dec 08 09:56:02 crc kubenswrapper[4662]: E1208 09:56:02.056149 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8e9c63-4f22-4263-8a4e-4874d3145b5a" containerName="container-00"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.056157 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8e9c63-4f22-4263-8a4e-4874d3145b5a" containerName="container-00"
Dec 08 09:56:02 crc kubenswrapper[4662]: E1208 09:56:02.056302 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef624d5b-6078-4049-b3a4-3e9cbbe2730f" containerName="copy"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.056345 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef624d5b-6078-4049-b3a4-3e9cbbe2730f" containerName="copy"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.056570 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8e9c63-4f22-4263-8a4e-4874d3145b5a" containerName="container-00"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.056597 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef624d5b-6078-4049-b3a4-3e9cbbe2730f" containerName="copy"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.056612 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef624d5b-6078-4049-b3a4-3e9cbbe2730f" containerName="gather"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.061033 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.071378 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hx87h"]
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.222862 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmgk9\" (UniqueName: \"kubernetes.io/projected/108fa642-b3cb-402e-8199-b8e6062883b6-kube-api-access-tmgk9\") pod \"certified-operators-hx87h\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") " pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.228263 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-utilities\") pod \"certified-operators-hx87h\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") " pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.228538 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-catalog-content\") pod \"certified-operators-hx87h\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") " pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.330543 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-utilities\") pod \"certified-operators-hx87h\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") " pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.331230 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-catalog-content\") pod \"certified-operators-hx87h\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") " pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.331640 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmgk9\" (UniqueName: \"kubernetes.io/projected/108fa642-b3cb-402e-8199-b8e6062883b6-kube-api-access-tmgk9\") pod \"certified-operators-hx87h\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") " pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.331521 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-catalog-content\") pod \"certified-operators-hx87h\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") " pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.331175 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-utilities\") pod \"certified-operators-hx87h\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") " pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.353502 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmgk9\" (UniqueName: \"kubernetes.io/projected/108fa642-b3cb-402e-8199-b8e6062883b6-kube-api-access-tmgk9\") pod \"certified-operators-hx87h\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") " pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:02 crc kubenswrapper[4662]: I1208 09:56:02.458292 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:03 crc kubenswrapper[4662]: I1208 09:56:03.134105 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hx87h"]
Dec 08 09:56:03 crc kubenswrapper[4662]: I1208 09:56:03.601983 4662 generic.go:334] "Generic (PLEG): container finished" podID="108fa642-b3cb-402e-8199-b8e6062883b6" containerID="34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b" exitCode=0
Dec 08 09:56:03 crc kubenswrapper[4662]: I1208 09:56:03.602032 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx87h" event={"ID":"108fa642-b3cb-402e-8199-b8e6062883b6","Type":"ContainerDied","Data":"34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b"}
Dec 08 09:56:03 crc kubenswrapper[4662]: I1208 09:56:03.602055 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx87h" event={"ID":"108fa642-b3cb-402e-8199-b8e6062883b6","Type":"ContainerStarted","Data":"f9ab675ebdcea7f89d77e2471f266e945fd478451dc4928983f983a5280ac7e0"}
Dec 08 09:56:03 crc kubenswrapper[4662]: I1208 09:56:03.604677 4662 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 08 09:56:04 crc kubenswrapper[4662]: I1208 09:56:04.611639 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx87h" event={"ID":"108fa642-b3cb-402e-8199-b8e6062883b6","Type":"ContainerStarted","Data":"69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072"}
Dec 08 09:56:05 crc kubenswrapper[4662]: I1208 09:56:05.632292 4662 generic.go:334] "Generic (PLEG): container finished" podID="108fa642-b3cb-402e-8199-b8e6062883b6" containerID="69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072" exitCode=0
Dec 08 09:56:05 crc kubenswrapper[4662]: I1208 09:56:05.632418 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx87h" event={"ID":"108fa642-b3cb-402e-8199-b8e6062883b6","Type":"ContainerDied","Data":"69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072"}
Dec 08 09:56:06 crc kubenswrapper[4662]: I1208 09:56:06.642952 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx87h" event={"ID":"108fa642-b3cb-402e-8199-b8e6062883b6","Type":"ContainerStarted","Data":"b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94"}
Dec 08 09:56:12 crc kubenswrapper[4662]: I1208 09:56:12.458776 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:12 crc kubenswrapper[4662]: I1208 09:56:12.459298 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:12 crc kubenswrapper[4662]: I1208 09:56:12.507540 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:12 crc kubenswrapper[4662]: I1208 09:56:12.526605 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hx87h" podStartSLOduration=7.882235632 podStartE2EDuration="10.526584975s" podCreationTimestamp="2025-12-08 09:56:02 +0000 UTC" firstStartedPulling="2025-12-08 09:56:03.604423411 +0000 UTC m=+2487.173451401" lastFinishedPulling="2025-12-08 09:56:06.248772754 +0000 UTC m=+2489.817800744" observedRunningTime="2025-12-08 09:56:06.683890026 +0000 UTC m=+2490.252918056" watchObservedRunningTime="2025-12-08 09:56:12.526584975 +0000 UTC m=+2496.095612975"
Dec 08 09:56:12 crc kubenswrapper[4662]: I1208 09:56:12.736458 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:12 crc kubenswrapper[4662]: I1208 09:56:12.787722 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hx87h"]
Dec 08 09:56:14 crc kubenswrapper[4662]: I1208 09:56:14.711071 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hx87h" podUID="108fa642-b3cb-402e-8199-b8e6062883b6" containerName="registry-server" containerID="cri-o://b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94" gracePeriod=2
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.396283 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.588826 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-catalog-content\") pod \"108fa642-b3cb-402e-8199-b8e6062883b6\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") "
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.588949 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-utilities\") pod \"108fa642-b3cb-402e-8199-b8e6062883b6\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") "
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.588994 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmgk9\" (UniqueName: \"kubernetes.io/projected/108fa642-b3cb-402e-8199-b8e6062883b6-kube-api-access-tmgk9\") pod \"108fa642-b3cb-402e-8199-b8e6062883b6\" (UID: \"108fa642-b3cb-402e-8199-b8e6062883b6\") "
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.590082 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-utilities" (OuterVolumeSpecName: "utilities") pod "108fa642-b3cb-402e-8199-b8e6062883b6" (UID: "108fa642-b3cb-402e-8199-b8e6062883b6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.595925 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108fa642-b3cb-402e-8199-b8e6062883b6-kube-api-access-tmgk9" (OuterVolumeSpecName: "kube-api-access-tmgk9") pod "108fa642-b3cb-402e-8199-b8e6062883b6" (UID: "108fa642-b3cb-402e-8199-b8e6062883b6"). InnerVolumeSpecName "kube-api-access-tmgk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.636527 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "108fa642-b3cb-402e-8199-b8e6062883b6" (UID: "108fa642-b3cb-402e-8199-b8e6062883b6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.690979 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.691256 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/108fa642-b3cb-402e-8199-b8e6062883b6-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.691319 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmgk9\" (UniqueName: \"kubernetes.io/projected/108fa642-b3cb-402e-8199-b8e6062883b6-kube-api-access-tmgk9\") on node \"crc\" DevicePath \"\""
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.721158 4662 generic.go:334] "Generic (PLEG): container finished" podID="108fa642-b3cb-402e-8199-b8e6062883b6" containerID="b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94" exitCode=0
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.721207 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx87h" event={"ID":"108fa642-b3cb-402e-8199-b8e6062883b6","Type":"ContainerDied","Data":"b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94"}
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.721238 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hx87h" event={"ID":"108fa642-b3cb-402e-8199-b8e6062883b6","Type":"ContainerDied","Data":"f9ab675ebdcea7f89d77e2471f266e945fd478451dc4928983f983a5280ac7e0"}
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.721260 4662 scope.go:117] "RemoveContainer" containerID="b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94"
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.721419 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hx87h"
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.773993 4662 scope.go:117] "RemoveContainer" containerID="69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072"
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.778913 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hx87h"]
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.788578 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hx87h"]
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.803043 4662 scope.go:117] "RemoveContainer" containerID="34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b"
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.830102 4662 scope.go:117] "RemoveContainer" containerID="b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94"
Dec 08 09:56:15 crc kubenswrapper[4662]: E1208 09:56:15.830587 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94\": container with ID starting with b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94 not found: ID does not exist" containerID="b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94"
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.830624 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94"} err="failed to get container status \"b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94\": rpc error: code = NotFound desc = could not find container \"b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94\": container with ID starting with b4adb37c5e4383a8b57864bdabbc2d70e81e5f7e31194886a9aa21eaa707fe94 not found: ID does not exist"
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.830665 4662 scope.go:117] "RemoveContainer" containerID="69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072"
Dec 08 09:56:15 crc kubenswrapper[4662]: E1208 09:56:15.831068 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072\": container with ID starting with 69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072 not found: ID does not exist" containerID="69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072"
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.831091 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072"} err="failed to get container status \"69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072\": rpc error: code = NotFound desc = could not find container \"69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072\": container with ID starting with 69e522fa35aa96c4dc617ab9959a1c62984b2dac2d500d4f28cf9ff075e1d072 not found: ID does not exist"
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.831105 4662 scope.go:117] "RemoveContainer" containerID="34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b"
Dec 08 09:56:15 crc kubenswrapper[4662]: E1208 09:56:15.831367 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b\": container with ID starting with 34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b not found: ID does not exist" containerID="34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b"
Dec 08 09:56:15 crc kubenswrapper[4662]: I1208 09:56:15.831399 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b"} err="failed to get container status \"34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b\": rpc error: code = NotFound desc = could not find container \"34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b\": container with ID starting with 34b5739bc19fe8596b0bd04152d7c7b6483e52ac68acc872e436043420690b8b not found: ID does not exist"
Dec 08 09:56:16 crc kubenswrapper[4662]: I1208 09:56:16.710311 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="108fa642-b3cb-402e-8199-b8e6062883b6" path="/var/lib/kubelet/pods/108fa642-b3cb-402e-8199-b8e6062883b6/volumes"
Dec 08 09:58:02 crc kubenswrapper[4662]: I1208 09:58:02.611920 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 09:58:02 crc kubenswrapper[4662]: I1208 09:58:02.613534 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 09:58:32 crc kubenswrapper[4662]: I1208 09:58:32.611023 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 09:58:32 crc kubenswrapper[4662]: I1208 09:58:32.612374 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.765308 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cz7tc"]
Dec 08 09:58:45 crc kubenswrapper[4662]: E1208 09:58:45.766482 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108fa642-b3cb-402e-8199-b8e6062883b6" containerName="extract-content"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.766517 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="108fa642-b3cb-402e-8199-b8e6062883b6" containerName="extract-content"
Dec 08 09:58:45 crc kubenswrapper[4662]: E1208 09:58:45.766555 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108fa642-b3cb-402e-8199-b8e6062883b6" containerName="registry-server"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.766567 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="108fa642-b3cb-402e-8199-b8e6062883b6" containerName="registry-server"
Dec 08 09:58:45 crc kubenswrapper[4662]: E1208 09:58:45.766593 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108fa642-b3cb-402e-8199-b8e6062883b6" containerName="extract-utilities"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.766605 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="108fa642-b3cb-402e-8199-b8e6062883b6" containerName="extract-utilities"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.767032 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="108fa642-b3cb-402e-8199-b8e6062883b6" containerName="registry-server"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.769711 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.778282 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cz7tc"]
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.887428 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-utilities\") pod \"community-operators-cz7tc\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") " pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.887499 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-catalog-content\") pod \"community-operators-cz7tc\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") " pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.888284 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68pbj\" (UniqueName: \"kubernetes.io/projected/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-kube-api-access-68pbj\") pod \"community-operators-cz7tc\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") " pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.990302 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-utilities\") pod \"community-operators-cz7tc\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") " pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.990377 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-catalog-content\") pod \"community-operators-cz7tc\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") " pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.990425 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68pbj\" (UniqueName: \"kubernetes.io/projected/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-kube-api-access-68pbj\") pod \"community-operators-cz7tc\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") " pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.990993 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-utilities\") pod \"community-operators-cz7tc\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") " pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:45 crc kubenswrapper[4662]: I1208 09:58:45.991062 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-catalog-content\") pod \"community-operators-cz7tc\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") " pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:46 crc kubenswrapper[4662]: I1208 09:58:46.015155 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68pbj\" (UniqueName: \"kubernetes.io/projected/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-kube-api-access-68pbj\") pod \"community-operators-cz7tc\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") " pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:46 crc kubenswrapper[4662]: I1208 09:58:46.106290 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:46 crc kubenswrapper[4662]: I1208 09:58:46.669074 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cz7tc"]
Dec 08 09:58:47 crc kubenswrapper[4662]: I1208 09:58:47.035619 4662 generic.go:334] "Generic (PLEG): container finished" podID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerID="4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf" exitCode=0
Dec 08 09:58:47 crc kubenswrapper[4662]: I1208 09:58:47.035708 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cz7tc" event={"ID":"3d70b7ee-b060-42d1-85b9-57e0069e9ccf","Type":"ContainerDied","Data":"4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf"}
Dec 08 09:58:47 crc kubenswrapper[4662]: I1208 09:58:47.037292 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cz7tc" event={"ID":"3d70b7ee-b060-42d1-85b9-57e0069e9ccf","Type":"ContainerStarted","Data":"47e1f17096623a624c99ab2f48bc49ae5e4c673e00614809ea78439ff2296e4e"}
Dec 08 09:58:48 crc kubenswrapper[4662]: I1208 09:58:48.048105 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cz7tc" event={"ID":"3d70b7ee-b060-42d1-85b9-57e0069e9ccf","Type":"ContainerStarted","Data":"2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff"}
Dec 08 09:58:49 crc kubenswrapper[4662]: I1208 09:58:49.069853 4662 generic.go:334] "Generic (PLEG): container finished" podID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerID="2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff" exitCode=0
Dec 08 09:58:49 crc kubenswrapper[4662]: I1208 09:58:49.070331 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cz7tc" event={"ID":"3d70b7ee-b060-42d1-85b9-57e0069e9ccf","Type":"ContainerDied","Data":"2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff"}
Dec 08 09:58:50 crc kubenswrapper[4662]: I1208 09:58:50.080040 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cz7tc" event={"ID":"3d70b7ee-b060-42d1-85b9-57e0069e9ccf","Type":"ContainerStarted","Data":"8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d"}
Dec 08 09:58:50 crc kubenswrapper[4662]: I1208 09:58:50.107555 4662 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cz7tc" podStartSLOduration=2.656631718 podStartE2EDuration="5.107534906s" podCreationTimestamp="2025-12-08 09:58:45 +0000 UTC" firstStartedPulling="2025-12-08 09:58:47.03872619 +0000 UTC m=+2650.607754200" lastFinishedPulling="2025-12-08 09:58:49.489629408 +0000 UTC m=+2653.058657388" observedRunningTime="2025-12-08 09:58:50.105262484 +0000 UTC m=+2653.674290474" watchObservedRunningTime="2025-12-08 09:58:50.107534906 +0000 UTC m=+2653.676562896"
Dec 08 09:58:56 crc kubenswrapper[4662]: I1208 09:58:56.107330 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:56 crc kubenswrapper[4662]: I1208 09:58:56.108017 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:56 crc kubenswrapper[4662]: I1208 09:58:56.150625 4662 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:56 crc kubenswrapper[4662]: I1208 09:58:56.197586 4662 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:56 crc kubenswrapper[4662]: I1208 09:58:56.384397 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cz7tc"]
Dec 08 09:58:58 crc kubenswrapper[4662]: I1208 09:58:58.166588 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cz7tc" podUID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerName="registry-server" containerID="cri-o://8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d" gracePeriod=2
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.083506 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.175626 4662 generic.go:334] "Generic (PLEG): container finished" podID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerID="8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d" exitCode=0
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.175666 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cz7tc" event={"ID":"3d70b7ee-b060-42d1-85b9-57e0069e9ccf","Type":"ContainerDied","Data":"8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d"}
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.175684 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cz7tc"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.175704 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cz7tc" event={"ID":"3d70b7ee-b060-42d1-85b9-57e0069e9ccf","Type":"ContainerDied","Data":"47e1f17096623a624c99ab2f48bc49ae5e4c673e00614809ea78439ff2296e4e"}
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.175721 4662 scope.go:117] "RemoveContainer" containerID="8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.196457 4662 scope.go:117] "RemoveContainer" containerID="2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.216890 4662 scope.go:117] "RemoveContainer" containerID="4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.256683 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-utilities\") pod \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") "
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.257330 4662 scope.go:117] "RemoveContainer" containerID="8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.258050 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-utilities" (OuterVolumeSpecName: "utilities") pod "3d70b7ee-b060-42d1-85b9-57e0069e9ccf" (UID: "3d70b7ee-b060-42d1-85b9-57e0069e9ccf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.258488 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-catalog-content\") pod \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") "
Dec 08 09:58:59 crc kubenswrapper[4662]: E1208 09:58:59.258902 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d\": container with ID starting with 8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d not found: ID does not exist" containerID="8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.258969 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d"} err="failed to get container status \"8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d\": rpc error: code = NotFound desc = could not find container \"8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d\": container with ID starting with 8344b4f73cf79e8cb581adc368f5c0396e3c2db9bce1cace2c531c3b44a4aa1d not found: ID does not exist"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.259830 4662 scope.go:117] "RemoveContainer" containerID="2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff"
Dec 08 09:58:59 crc kubenswrapper[4662]: E1208 09:58:59.261188 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff\": container with ID starting with 2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff not found: ID does not exist" containerID="2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.261223 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff"} err="failed to get container status \"2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff\": rpc error: code = NotFound desc = could not find container \"2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff\": container with ID starting with 2c149b896ef7aee78876fc22ec0a72b2c3cfc3c6afeaa6229ecbc2b5aa920dff not found: ID does not exist"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.261243 4662 scope.go:117] "RemoveContainer" containerID="4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf"
Dec 08 09:58:59 crc kubenswrapper[4662]: E1208 09:58:59.261541 4662 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf\": container with ID starting with 4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf not found: ID does not exist" containerID="4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.261568 4662 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf"} err="failed to get container status \"4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf\": rpc error: code = NotFound desc = could not find container \"4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf\": container with ID starting with 4b6a49643184aec2f0ef0bc3a54cc7bf8caf1c271abd8c5e977325d46c08edaf not found: ID does not exist"
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.271887 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68pbj\" (UniqueName: \"kubernetes.io/projected/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-kube-api-access-68pbj\") pod \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\" (UID: \"3d70b7ee-b060-42d1-85b9-57e0069e9ccf\") "
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.273763 4662 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.279809 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-kube-api-access-68pbj" (OuterVolumeSpecName: "kube-api-access-68pbj") pod "3d70b7ee-b060-42d1-85b9-57e0069e9ccf" (UID: "3d70b7ee-b060-42d1-85b9-57e0069e9ccf"). InnerVolumeSpecName "kube-api-access-68pbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.315178 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3d70b7ee-b060-42d1-85b9-57e0069e9ccf" (UID: "3d70b7ee-b060-42d1-85b9-57e0069e9ccf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.375542 4662 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.375824 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68pbj\" (UniqueName: \"kubernetes.io/projected/3d70b7ee-b060-42d1-85b9-57e0069e9ccf-kube-api-access-68pbj\") on node \"crc\" DevicePath \"\""
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.509179 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cz7tc"]
Dec 08 09:58:59 crc kubenswrapper[4662]: I1208 09:58:59.518313 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cz7tc"]
Dec 08 09:59:00 crc kubenswrapper[4662]: I1208 09:59:00.710555 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" path="/var/lib/kubelet/pods/3d70b7ee-b060-42d1-85b9-57e0069e9ccf/volumes"
Dec 08 09:59:02 crc kubenswrapper[4662]: I1208 09:59:02.611422 4662 patch_prober.go:28] interesting pod/machine-config-daemon-5dzps container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 09:59:02 crc kubenswrapper[4662]: I1208 09:59:02.611877 4662 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 09:59:02 crc kubenswrapper[4662]: I1208 09:59:02.611939 4662 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-5dzps"
Dec 08 09:59:02 crc kubenswrapper[4662]: I1208 09:59:02.614408 4662 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e29e3cd82914bc72ad862d3f34f10f62625c6c1558dc9e8796600fcac6c2a6a"} pod="openshift-machine-config-operator/machine-config-daemon-5dzps" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Dec 08 09:59:02 crc kubenswrapper[4662]: I1208 09:59:02.614548 4662 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" podUID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerName="machine-config-daemon" containerID="cri-o://2e29e3cd82914bc72ad862d3f34f10f62625c6c1558dc9e8796600fcac6c2a6a" gracePeriod=600
Dec 08 09:59:03 crc kubenswrapper[4662]: I1208 09:59:03.495606 4662 generic.go:334] "Generic (PLEG): container finished" podID="0e629796-86fa-4436-8a01-326fc70c7dc1" containerID="2e29e3cd82914bc72ad862d3f34f10f62625c6c1558dc9e8796600fcac6c2a6a" exitCode=0
Dec 08 09:59:03 crc kubenswrapper[4662]: I1208 09:59:03.495639 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerDied","Data":"2e29e3cd82914bc72ad862d3f34f10f62625c6c1558dc9e8796600fcac6c2a6a"}
Dec 08 09:59:03 crc kubenswrapper[4662]: I1208 09:59:03.496224 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-5dzps" event={"ID":"0e629796-86fa-4436-8a01-326fc70c7dc1","Type":"ContainerStarted","Data":"1600e9fcf28b04885ff44f916d8cf1ebba3c2af761c4b8128b3f1decd200deb8"}
Dec 08 09:59:03 crc kubenswrapper[4662]: I1208 09:59:03.496248 4662 scope.go:117] "RemoveContainer" containerID="ddd2b606aace53c89bfc37b3e836db964e6673dcb6c04c3d96da9c5f6262261a"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.149858 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"]
Dec 08 10:00:00 crc kubenswrapper[4662]: E1208 10:00:00.150799 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerName="extract-content"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.150815 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerName="extract-content"
Dec 08 10:00:00 crc kubenswrapper[4662]: E1208 10:00:00.150848 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerName="registry-server"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.150857 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerName="registry-server"
Dec 08 10:00:00 crc kubenswrapper[4662]: E1208 10:00:00.150886 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerName="extract-utilities"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.150894 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerName="extract-utilities"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.151084 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d70b7ee-b060-42d1-85b9-57e0069e9ccf" containerName="registry-server"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.151826 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.153839 4662 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.154120 4662 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.181681 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"]
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.250531 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwvgr\" (UniqueName: \"kubernetes.io/projected/865a19c7-38f9-481b-8fe6-83e3162b273c-kube-api-access-vwvgr\") pod \"collect-profiles-29419800-pm9zw\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.250592 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/865a19c7-38f9-481b-8fe6-83e3162b273c-secret-volume\") pod \"collect-profiles-29419800-pm9zw\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.250676 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/865a19c7-38f9-481b-8fe6-83e3162b273c-config-volume\") pod \"collect-profiles-29419800-pm9zw\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.353004 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwvgr\" (UniqueName: \"kubernetes.io/projected/865a19c7-38f9-481b-8fe6-83e3162b273c-kube-api-access-vwvgr\") pod \"collect-profiles-29419800-pm9zw\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.353071 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/865a19c7-38f9-481b-8fe6-83e3162b273c-secret-volume\") pod \"collect-profiles-29419800-pm9zw\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.353153 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/865a19c7-38f9-481b-8fe6-83e3162b273c-config-volume\") pod \"collect-profiles-29419800-pm9zw\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.354100 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/865a19c7-38f9-481b-8fe6-83e3162b273c-config-volume\") pod \"collect-profiles-29419800-pm9zw\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.363894 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/865a19c7-38f9-481b-8fe6-83e3162b273c-secret-volume\") pod \"collect-profiles-29419800-pm9zw\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.376783 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwvgr\" (UniqueName: \"kubernetes.io/projected/865a19c7-38f9-481b-8fe6-83e3162b273c-kube-api-access-vwvgr\") pod \"collect-profiles-29419800-pm9zw\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:00 crc kubenswrapper[4662]: I1208 10:00:00.521363 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:01 crc kubenswrapper[4662]: I1208 10:00:01.028721 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"]
Dec 08 10:00:01 crc kubenswrapper[4662]: I1208 10:00:01.038144 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw" event={"ID":"865a19c7-38f9-481b-8fe6-83e3162b273c","Type":"ContainerStarted","Data":"28a71d509734fb696b5b4394635b7d911e3c8ddfc9af315c90541daa4356956d"}
Dec 08 10:00:02 crc kubenswrapper[4662]: I1208 10:00:02.049733 4662 generic.go:334] "Generic (PLEG): container finished" podID="865a19c7-38f9-481b-8fe6-83e3162b273c" containerID="5b195c37cd0d32842de7062d8bf7ef36d17b0ab387906236fab9a4eda245624f" exitCode=0
Dec 08 10:00:02 crc kubenswrapper[4662]: I1208 10:00:02.049859 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw" event={"ID":"865a19c7-38f9-481b-8fe6-83e3162b273c","Type":"ContainerDied","Data":"5b195c37cd0d32842de7062d8bf7ef36d17b0ab387906236fab9a4eda245624f"}
Dec 08 10:00:03 crc kubenswrapper[4662]: I1208 10:00:03.414626 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:03 crc kubenswrapper[4662]: I1208 10:00:03.513881 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwvgr\" (UniqueName: \"kubernetes.io/projected/865a19c7-38f9-481b-8fe6-83e3162b273c-kube-api-access-vwvgr\") pod \"865a19c7-38f9-481b-8fe6-83e3162b273c\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") "
Dec 08 10:00:03 crc kubenswrapper[4662]: I1208 10:00:03.513976 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/865a19c7-38f9-481b-8fe6-83e3162b273c-config-volume\") pod \"865a19c7-38f9-481b-8fe6-83e3162b273c\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") "
Dec 08 10:00:03 crc kubenswrapper[4662]: I1208 10:00:03.514054 4662 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/865a19c7-38f9-481b-8fe6-83e3162b273c-secret-volume\") pod \"865a19c7-38f9-481b-8fe6-83e3162b273c\" (UID: \"865a19c7-38f9-481b-8fe6-83e3162b273c\") "
Dec 08 10:00:03 crc kubenswrapper[4662]: I1208 10:00:03.515337 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/865a19c7-38f9-481b-8fe6-83e3162b273c-config-volume" (OuterVolumeSpecName: "config-volume") pod "865a19c7-38f9-481b-8fe6-83e3162b273c" (UID: "865a19c7-38f9-481b-8fe6-83e3162b273c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 08 10:00:03 crc kubenswrapper[4662]: I1208 10:00:03.521016 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/865a19c7-38f9-481b-8fe6-83e3162b273c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "865a19c7-38f9-481b-8fe6-83e3162b273c" (UID: "865a19c7-38f9-481b-8fe6-83e3162b273c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 08 10:00:03 crc kubenswrapper[4662]: I1208 10:00:03.524977 4662 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/865a19c7-38f9-481b-8fe6-83e3162b273c-kube-api-access-vwvgr" (OuterVolumeSpecName: "kube-api-access-vwvgr") pod "865a19c7-38f9-481b-8fe6-83e3162b273c" (UID: "865a19c7-38f9-481b-8fe6-83e3162b273c"). InnerVolumeSpecName "kube-api-access-vwvgr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 08 10:00:03 crc kubenswrapper[4662]: I1208 10:00:03.615767 4662 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwvgr\" (UniqueName: \"kubernetes.io/projected/865a19c7-38f9-481b-8fe6-83e3162b273c-kube-api-access-vwvgr\") on node \"crc\" DevicePath \"\""
Dec 08 10:00:03 crc kubenswrapper[4662]: I1208 10:00:03.615806 4662 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/865a19c7-38f9-481b-8fe6-83e3162b273c-config-volume\") on node \"crc\" DevicePath \"\""
Dec 08 10:00:03 crc kubenswrapper[4662]: I1208 10:00:03.615818 4662 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/865a19c7-38f9-481b-8fe6-83e3162b273c-secret-volume\") on node \"crc\" DevicePath \"\""
Dec 08 10:00:04 crc kubenswrapper[4662]: I1208 10:00:04.067972 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw" event={"ID":"865a19c7-38f9-481b-8fe6-83e3162b273c","Type":"ContainerDied","Data":"28a71d509734fb696b5b4394635b7d911e3c8ddfc9af315c90541daa4356956d"}
Dec 08 10:00:04 crc kubenswrapper[4662]: I1208 10:00:04.068023 4662 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28a71d509734fb696b5b4394635b7d911e3c8ddfc9af315c90541daa4356956d"
Dec 08 10:00:04 crc kubenswrapper[4662]: I1208 10:00:04.068094 4662 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29419800-pm9zw"
Dec 08 10:00:04 crc kubenswrapper[4662]: I1208 10:00:04.489007 4662 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b"]
Dec 08 10:00:04 crc kubenswrapper[4662]: I1208 10:00:04.496481 4662 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29419755-m4q6b"]
Dec 08 10:00:04 crc kubenswrapper[4662]: I1208 10:00:04.707940 4662 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49a2742-5f89-4a17-a477-dffb8db27f9c" path="/var/lib/kubelet/pods/a49a2742-5f89-4a17-a477-dffb8db27f9c/volumes"
Dec 08 10:00:18 crc kubenswrapper[4662]: I1208 10:00:18.802835 4662 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rhlvq"]
Dec 08 10:00:18 crc kubenswrapper[4662]: E1208 10:00:18.803594 4662 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="865a19c7-38f9-481b-8fe6-83e3162b273c" containerName="collect-profiles"
Dec 08 10:00:18 crc kubenswrapper[4662]: I1208 10:00:18.803607 4662 state_mem.go:107] "Deleted CPUSet assignment" podUID="865a19c7-38f9-481b-8fe6-83e3162b273c" containerName="collect-profiles"
Dec 08 10:00:18 crc kubenswrapper[4662]: I1208 10:00:18.803790 4662 memory_manager.go:354] "RemoveStaleState removing state" podUID="865a19c7-38f9-481b-8fe6-83e3162b273c" containerName="collect-profiles"
Dec 08 10:00:18 crc kubenswrapper[4662]: I1208 10:00:18.805053 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:18 crc kubenswrapper[4662]: I1208 10:00:18.822339 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rhlvq"]
Dec 08 10:00:18 crc kubenswrapper[4662]: I1208 10:00:18.899997 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9123ca0f-197f-43fa-821d-bb31434ef1f6-catalog-content\") pod \"redhat-operators-rhlvq\" (UID: \"9123ca0f-197f-43fa-821d-bb31434ef1f6\") " pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:18 crc kubenswrapper[4662]: I1208 10:00:18.900061 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt86l\" (UniqueName: \"kubernetes.io/projected/9123ca0f-197f-43fa-821d-bb31434ef1f6-kube-api-access-rt86l\") pod \"redhat-operators-rhlvq\" (UID: \"9123ca0f-197f-43fa-821d-bb31434ef1f6\") " pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:18 crc kubenswrapper[4662]: I1208 10:00:18.900114 4662 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9123ca0f-197f-43fa-821d-bb31434ef1f6-utilities\") pod \"redhat-operators-rhlvq\" (UID: \"9123ca0f-197f-43fa-821d-bb31434ef1f6\") " pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:19 crc kubenswrapper[4662]: I1208 10:00:19.001407 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9123ca0f-197f-43fa-821d-bb31434ef1f6-catalog-content\") pod \"redhat-operators-rhlvq\" (UID: \"9123ca0f-197f-43fa-821d-bb31434ef1f6\") " pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:19 crc kubenswrapper[4662]: I1208 10:00:19.001478 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rt86l\" (UniqueName: \"kubernetes.io/projected/9123ca0f-197f-43fa-821d-bb31434ef1f6-kube-api-access-rt86l\") pod \"redhat-operators-rhlvq\" (UID: \"9123ca0f-197f-43fa-821d-bb31434ef1f6\") " pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:19 crc kubenswrapper[4662]: I1208 10:00:19.001555 4662 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9123ca0f-197f-43fa-821d-bb31434ef1f6-utilities\") pod \"redhat-operators-rhlvq\" (UID: \"9123ca0f-197f-43fa-821d-bb31434ef1f6\") " pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:19 crc kubenswrapper[4662]: I1208 10:00:19.002033 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9123ca0f-197f-43fa-821d-bb31434ef1f6-catalog-content\") pod \"redhat-operators-rhlvq\" (UID: \"9123ca0f-197f-43fa-821d-bb31434ef1f6\") " pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:19 crc kubenswrapper[4662]: I1208 10:00:19.002072 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9123ca0f-197f-43fa-821d-bb31434ef1f6-utilities\") pod \"redhat-operators-rhlvq\" (UID: \"9123ca0f-197f-43fa-821d-bb31434ef1f6\") " pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:19 crc kubenswrapper[4662]: I1208 10:00:19.020718 4662 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rt86l\" (UniqueName: \"kubernetes.io/projected/9123ca0f-197f-43fa-821d-bb31434ef1f6-kube-api-access-rt86l\") pod \"redhat-operators-rhlvq\" (UID: \"9123ca0f-197f-43fa-821d-bb31434ef1f6\") " pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:19 crc kubenswrapper[4662]: I1208 10:00:19.127555 4662 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rhlvq"
Dec 08 10:00:19 crc kubenswrapper[4662]: I1208 10:00:19.790648 4662 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rhlvq"]
Dec 08 10:00:20 crc kubenswrapper[4662]: I1208 10:00:20.200756 4662 generic.go:334] "Generic (PLEG): container finished" podID="9123ca0f-197f-43fa-821d-bb31434ef1f6" containerID="704540ee171270f2ec402b9fdb004d055322574575d821905153f94c6ce3d3d1" exitCode=0
Dec 08 10:00:20 crc kubenswrapper[4662]: I1208 10:00:20.200918 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhlvq" event={"ID":"9123ca0f-197f-43fa-821d-bb31434ef1f6","Type":"ContainerDied","Data":"704540ee171270f2ec402b9fdb004d055322574575d821905153f94c6ce3d3d1"}
Dec 08 10:00:20 crc kubenswrapper[4662]: I1208 10:00:20.201012 4662 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rhlvq" event={"ID":"9123ca0f-197f-43fa-821d-bb31434ef1f6","Type":"ContainerStarted","Data":"6c8eab5f56644f31dd6c5993248043cf8875ce7c6d05395db6fa40a814acf24c"}